Tuesday, February 25, 2014

If you know what you're doing, you're doing the wrong thing.

I read something to this effect, but I can't find the source: "In software engineering, the more you know what it is that you're doing, the more likely it is that you're doing the wrong thing." I plan to come back to this theme, so I'd like to expand on it. The way I see it, this argument is about a couple of (related) things:

  • Encouraging individual growth, and
  • Avoiding obsolescence.

I'd like to end this essay on a high note, so let me start with the more negative reading:

Avoiding Obsolescence

I agree with Marc Andreessen: Software is eating the world. (Aside: when virtual reality becomes commonplace, we will be able to say "software has eaten the world".) This is happening because of a feedback loop. If we start with the idea that software is an operational representation of knowledge, then:

  1. As more information becomes accessible to software, more of the knowledge that acts on that information can be operationalized as software.
  2. Sometimes, applying that knowledge produces new information.
  3. When such knowledge is operationalized as software, the new information it produces becomes accessible to software, which closes the loop.

By implication, if you have highly explicit knowledge about how to do your work, then that knowledge can now, or soon will, be operationalized as software. At that point, you won't be needed to do the work any more. Worse, if you don't have explicit knowledge about how to do your work (likely because the knowledge is tacit), that does not mean that no one else does. And if someone else knows how to operationalize what you do as software, you still won't be needed to do it any more.

This seems to be a negative reading, but it doesn't have to be. The tasks about which we tend to have the highest explicit knowledge will be those tasks that are closest to drudgery. Automating that part of our work frees us to focus on the more interesting aspects of our work. It can dramatically increase our potential as individuals. Which brings us to this point:

Encouraging Individual Growth

If you know how to automate what you're doing, then automate it, and work on the part you don't know how to automate. If someone else knows how to automate what you're doing, try to use what they've done, and work on some other aspect. Orient your career away from rote work. If you want to do knowledge work, then the best way to be confident that what you're doing can't be automated is to work at some frontier of knowledge: your job should involve the production of new knowledge, since this new knowledge will not have been operationalized as software yet. In other words, the best way to avoid obsolescence in knowledge work involves continuous growth.

Software is increasing your potential. But it cannot fulfill your potential for you. Meeting your potential is hard. Growth requires that you push against boundaries that you don't yet understand. It will involve testing your theories, and re-evaluating them when evidence does not agree with their predictions. At the frontiers of knowledge, your initial ideas will usually fall short. If none of your ideas fail, at least to some extent, then you probably know what you're doing, and you more than likely aren't particularly close to a knowledge frontier. To be at a frontier of knowledge will mean that you won't always know what you're doing, and you will have individual failures. But when the right lessons are learned from failure, it can lead to new insight, and it is from insight that a knowledge worker provides the greatest value.

Fear of failure is ultimately much worse for a knowledge worker than failure itself: you can learn from failure. Fear of failure leads to intellectual stagnation, and then to obsolescence. "If you know what you're doing, you're doing the wrong thing" is an encouraging statement: it helps you accept that you don't completely understand what you're doing, yet. If you're figuring it out, though, you're growing closer to your potential.

Tuesday, February 18, 2014

The Paradox of Architecture

The definition of software architecture I use most often is Chris Verhoef's, which can be paraphrased as "that which is expensive to change." Those aspects of the system that are most expensive to change are also the ones that we most want to get right. In fact, I would argue that a software architect's value to a project is manifest whenever she avoids an expensive wrong decision that would have been made in her absence. It then follows that the role of a software architect is to avoid expensive mistakes. (Note, though, that inaction will also generally be a mistake.)

You might notice that this is a negative characterization of an architect's role: an architect's role is defined by what she does not do (make expensive mistakes), rather than by what she does. I'll come back to this later, after making another observation about the above definition of "architecture".

Defining architecture as "that which is expensive to change" provides an operational basis for determining if an aspect of the system is architectural. That is, given an aspect of the system, you can ask "how expensive would it be to change this?" to determine if that aspect is architectural: if it's expensive to change, then it's architectural; if it isn't, then it's not.

And this brings us to the title of this little essay, something I'll call the Paradox of Architecture. That is:

A large part of an architect's job is to avoid the introduction of architecture to a system.

I know this sounds completely backwards, and yet I believe it's true: the architect is responsible for preventing aspects of the design from becoming architectural. Consider the canonical example of a system designed without architecture, Foote and Yoder's Big Ball of Mud:

A BIG BALL OF MUD is haphazardly structured, sprawling, sloppy, duct-tape and bailing wire, spaghetti code jungle. We’ve all seen them. These systems show unmistakable signs of unregulated growth, and repeated, expedient repair. Information is shared promiscuously among distant elements of the system, often to the point where nearly all the important information becomes global or duplicated. The overall structure of the system may never have been well defined. If it was, it may have eroded beyond recognition.

A defining characteristic of these systems is that they are extremely difficult to maintain. Every aspect of the system is difficult (read: "expensive") to change. As such, every aspect of the system is architectural. Nothing is a simple "implementation detail", because no detail can be examined in isolation from the whole system. The whole system must be understood for every change. A Big Ball of Mud is nothing but architecture, and it is unmaintainable for that very reason.

On the other hand, consider the traditional Unix architecture. Doug McIlroy summarized the underlying philosophy beautifully as:

Write programs that do one thing and do it well. Write programs to work together. Write programs to handle text streams, because that is a universal interface.

(source: http://en.wikipedia.org/wiki/Unix_philosophy.)

Parts of the Unix architecture, like pipes and untyped storage ("everything is a file"), are oriented around serving this philosophy. Nothing that did not serve the philosophy was included, leaving Unix with a limited set of primitive operations. The expressive power of this limited set is what makes us consider Unix a well-architected system. The rest of a Unix system (which comprises its bulk) is much easier to change, because the architectural components are small and isolated. Most of Unix's value is not in its architecture, which is what makes its architecture so good.
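McIlroy's three maxims are easiest to appreciate in a pipeline. The sketch below (the input sentence and the particular tools chosen are mine, just for illustration) finds the most frequent word in a stream of text by composing single-purpose programs over plain text:

```shell
# Each program does one small thing; newline-delimited text is the
# interface that lets them compose.
printf 'the cat sat on the mat\n' |
  tr ' ' '\n' |   # split the line into one word per line
  sort |          # bring identical words together
  uniq -c |       # count each run of identical lines
  sort -rn |      # order by count, most frequent first
  head -n 1       # keep only the top entry ("the", with count 2)
```

None of these programs knows anything about the others, or about word frequency. The "architecture" doing the work here is just the pipe and the text-stream convention; everything else is an easily replaced implementation detail.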