Friday, March 31, 2017

Documenting With Types

I've said this before: elegant code is pedagogical. That is, elegant code is designed to teach its readers about the concepts and relationships in the problem domain that the code addresses, with as little noise as possible. I think types are a fundamental tool for teaching readers about the domain that code addresses, but heavy use of types tends to introduce noise into the code base. Still, types are often misunderstood, and, as such, tend to be under-utilized, or otherwise misused.

Types are a means of describing to the compiler what operations are valid against data. Different programming languages will have their own concept of data type and their own means of defining them. However, the type system in every language will have these two responsibilities: to allow the programmer to define what operations are valid against typed data, and to indicate failure when the source code contains a prohibited operation. (Of course, type definitions in most languages do more than this, such as also defining memory organization for typed data. These details are critical for translating source code into an executable form, but are not critical to the type-checker.)

The strength of a programming language's type system is related to how much it helps or hinders your efforts to discourage client error. It's my general thesis that the most important kind of help is when your types correspond to distinct domain concepts, simultaneously teaching your users about these concepts and their possible interactions, and discouraging the creation of nonsensical constructs. But this doesn't usually come for free. I won't go further into this here, but some hindrances are:

  • Error messages that don't help users understand their mistakes.
  • Excessively verbose type signatures.
  • Difficulty of representing correctness constraints.
  • More abstraction than your users will want to process.

Your types should pay for themselves, by some combination of keeping these costs low, of preventing catastrophic failure, and of being an effective teacher of your domain concepts.1

How does one document code with types? I have a few guidelines, but this post will run long if I go into all of them here, so I'll start with one:

Primitive types are rarely domain types.

Consider integers. A domain concept may be stored in an integer, but this integer will usually have some unit bound to it, like "10 seconds", or "148 bytes". It almost certainly does not make domain sense to treat a value you got back from time(2) like a value you got back from ftell(3), so these could be different types, and the type could be used to prevent misuse. Depending on language, I might even do so, but consider the options:

In C, you can use typedef to create a type alias, as POSIX does. This serves to document that different integers may be used differently, but does not actually prevent mis-use:

#include <inttypes.h>
typedef uint64_t my_offset_t;
typedef int64_t my_time_t;

void ex1() {
  my_offset_t a = 0;
  /* variable misuse, but not an error in C's type system. */
  my_time_t b = a;
}

You could use a struct to create a new type, but these are awkward to work with:

#include <inttypes.h>
#include <stdio.h>
typedef struct { uint64_t val; } my_offset_t;
typedef struct { int64_t val; } my_time_t;

void my_func() {
  my_offset_t offset_a = { .val=0 };
  my_offset_t offset_b = { .val=1 };
  my_time_t time_c = { .val=2 };

  /*
   * variable misuse is a compile-time error:
   *   time_c = offset_a;
   * does not compile.
   */

  /*
   * cannot directly use integer operations:
   *   if (offset_b > offset_a) { printf("offset_b > offset_a\n"); }
   * does not compile, but can use:
   */

  if (offset_b.val > offset_a.val) { printf("offset_b.val > offset_a.val\n"); }

  /*
   * but the technique of reaching directly into the structure to use
   * integer operations also allows:
   */

  if (time_c.val > offset_a.val) { printf("BAD\n"); }

  /*
   * which is a domain type error, but not a language type error
   * (though it may generate a warning for a signed / unsigned
   * comparison). One could define a suite of functions against the
   * new types, such as:
 *   int64_t compare_offsets(my_offset_t *restrict a, my_offset_t *restrict b)
   *   {
   *     return (int64_t) a->val - (int64_t) b->val;
   *   }
   * and then one could use the more type-safe code:
   *   if (compare_offsets(&offset_a, &offset_b) > 0) {
   *     printf("GOOD\n");
   *   }
   * but, in no particular order: this isn't idiomatic, so it's more
   * confusing to new maintainers; even once you're used to the new
   * functions, it's not as readable as idiomatic code; depending on
   * optimization and inlining, it's plausibly less efficient than
   * idiomatic code; and it is awkward and annoying to define new
   * functions to replace the built-in integer operations we'd like
   * to use.
   */
}

As far as I can tell, C does not provide ergonomic options for using the type system to prevent integer type confusion. That said, the likelihood of user error in this example (misusing a time as a file size, or vice versa) is pretty low, so I would probably make the same choice that POSIX did in this circumstance, and just use type aliases to document that the types are different, and give up on actually preventing mis-use.

On the other hand, at Circonus we maintain a time-series database that must deal with time at multiple resolutions, each represented as 64-bit integers: Aggregate data storage uses units of seconds-since-unix-epoch, while high-resolution data storage uses units of milliseconds-since-unix-epoch. In this case, the likelihood of user error working with these different views of time is very high (we have a number of places where we need to convert between these views, and have even needed to change some code from using time-in-seconds to using time-in-milliseconds). Furthermore, making mistakes would probably result in presenting the wrong data to the user (not something you want in a database), or possibly worse.

If we were strictly using C, I would probably want to follow the approach Joel Spolsky outlined here, and use a form of Hungarian notation to represent the different views of time. As it happens, we're using C++ in this part of the code base, so we can use the type system to enforce correctness. We have an internal proscription against using the STL (to keep our deployed code relatively trace-able with, say, dtrace), so std::chrono is out. But we can define our own types for working with these different views of time. We start by creating our own strong_typedef facility (no, we don't use BOOST, either):

#define ALWAYS_INLINE __attribute__((always_inline))

// bare-bones newtype facility, intended to wrap primitive types (like
// `int` or `char *`), imposes no run-time overhead.
template <typename oldtype, typename uniquify>
  class primitive_newtype
{
public:
  typedef oldtype oldtype_t;
  typedef primitive_newtype<oldtype, uniquify> self_t;

  primitive_newtype(oldtype val) : m_val(val) {}
  ALWAYS_INLINE oldtype_t to_oldtype() { return m_val; }
private:
  oldtype m_val;
};

With this facility, we can define domain types that are incompatible, but which share a representation, and which should impose no overhead over using the primitive types:

class _uniquify_s_t;
typedef primitive_newtype<int64_t, _uniquify_s_t> seconds_t;
class _uniquify_ms_t;
typedef primitive_newtype<int64_t, _uniquify_ms_t> milliseconds_t;
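A quick usage sketch (the function name and conversion factor here are mine, invented for illustration) shows what the tag parameter buys us: values with the same underlying representation no longer mix silently:

void example_usage()
{
  seconds_t s(10);

  // OK: same type.
  seconds_t s2 = s;

  // Compile-time error: no conversion exists between the two newtypes,
  // even though both wrap an int64_t:
  //   milliseconds_t bad = s;

  // Crossing the boundary requires being explicit:
  milliseconds_t ms(s.to_oldtype() * 1000);
}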

Or, better, since time types all share similar operations, we can define the types together with their operations, and also split the "time point" type from the "time duration" type, while enforcing a constraint that you can't add two time points together:

template <class uniquify>
  class my_time_t
{
private:
  class _uniquify_point_t;
  class _uniquify_diff_t;
public:
  typedef primitive_newtype<int64_t, _uniquify_point_t> point_t;
  typedef primitive_newtype<int64_t, _uniquify_diff_t> diff_t;
  static ALWAYS_INLINE point_t add(point_t a, diff_t b)
  {
    return point_t(a.to_oldtype() + b.to_oldtype());
  }
  static ALWAYS_INLINE diff_t diff(point_t a, point_t b)
  {
    return diff_t(a.to_oldtype() - b.to_oldtype());
  }
  static ALWAYS_INLINE point_t diff(point_t a, diff_t b)
  {
    return point_t(a.to_oldtype() - b.to_oldtype());
  }
  static ALWAYS_INLINE diff_t diff(diff_t a, diff_t b)
  {
    return diff_t(a.to_oldtype() - b.to_oldtype());
  }
  static ALWAYS_INLINE diff_t add(diff_t a, diff_t b)
  {
    return diff_t(a.to_oldtype() + b.to_oldtype());
  }
};

class _millisecond_uniquify_t;
typedef my_time_t<_millisecond_uniquify_t> my_millisecond_t;
class _second_uniquify_t;
typedef my_time_t<_second_uniquify_t> my_second_t;
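To show how these are meant to be used, here is a small hypothetical sketch (the variable names and values are invented for this post, not taken from our code base):

void example_deadline()
{
  my_second_t::point_t start(1490000000);
  my_second_t::diff_t timeout(30);

  // Point plus duration yields a point:
  my_second_t::point_t deadline = my_second_t::add(start, timeout);

  // Point minus point yields a duration:
  my_second_t::diff_t elapsed = my_second_t::diff(deadline, start);

  // Neither of these compiles:
  //   my_second_t::add(start, deadline);        // no adding two time points
  //   my_millisecond_t::diff(deadline, start);  // seconds points are not milliseconds points
}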

This is just the primitive basis of our time-management types, and (to help the example fit in a blog post, and because I write the blog for a different audience than I write production code) is implemented a little differently than what we actually have in our code base. With these new types, we can perform basic operations on time in seconds or milliseconds units, while preventing incorrect mixing of types: an attempt to take a difference between a time-point based in seconds and a time-point based in milliseconds, for example, will result in a compilation error. Using these facilities made translating one of our HTTP endpoints from operating against seconds to operating against milliseconds into an entirely mechanical process: convert one code location to use the new types, start a build, get a compilation error from a seconds / milliseconds mismatch, change that location to use the new types, and repeat. This process was much less likely to result in errors than it would have been had we been using bare int64_t's everywhere, relying on code audit to try to ensure that everything that worked with the old units was correctly updated to use the new ones. These types are more annoying to work with than bare integers, but using them helped avoid introducing a very likely and very significant system problem under maintenance. The types paid for themselves.
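When a location genuinely needs to cross resolutions, the conversion gets spelled out. Something like the following hypothetical helper (illustrative, not our production code) is all it takes, and every call site that still passes a seconds-based point where a milliseconds-based one is now expected becomes a compile error rather than a silent unit bug:

inline my_millisecond_t::point_t
to_millisecond_point(my_second_t::point_t s)
{
  // One seconds-based point becomes the equivalent milliseconds-based point.
  return my_millisecond_t::point_t(s.to_oldtype() * 1000);
}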

(Thanks to Riley Berton for reviewing this post. A form of this post also appeared on the Circonus blog, here.)


  1. C++ is an interesting case of weighing costs and benefits, in that, while the benefits of using advanced C++ type facilities can be very high (bugs in C++ code can be catastrophic, and many of C++'s advanced facilities impose no runtime overhead), the maintenance costs can also be extremely high, especially when using advanced type facilities. I've seen thousands of characters of error output due to a missing const in a type signature. This can be, ahem, intimidating.

Wednesday, December 7, 2016

Correctness and Maintainability

In my career, I've focused a lot on making my code bug-resistant, and I've spent a lot of time trying to learn or invent techniques that would systematically prevent the types of bugs I would run into the most frequently. I can think of three basic types of approach I've taken, which can be placed on a spectrum from more formal to less:

  • Exploiting type systems;
  • Defining code-based tests;
  • Simplifying reviewability or debuggability.

The three approaches can be characterized based on what resources we use to check that the code is correct. You can:

  • Use the compiler to verify correctness (via type-checking);
  • Use code to verify correctness (via testing assertions);
  • Use people to verify correctness (via code review or debugging).

And then you might observe how relatively likely it is that the verification missed a program bug:

  • The compiler will never pass a program that contains a type-checking error, though it's possible that the type-system is unsound (in which case a bug the type-system is supposed to prevent may still occur). In mature languages, such issues are either rare or well understood, so we can generally say that type-checking provides rigorous proof of program correctness -- inasmuch as program correctness is assessable by the type-system, but more on this later.
  • Testing will always detect code that fails the tests, but to be robust, tests must cover the entire interesting semantic range of the Unit Under Test, and this range can be extremely difficult to characterize. Designing for testability is largely about reducing the semantic range of our program units (such that meaningful tests can be written more easily), but even given that, it is difficult to determine the full semantic range of a system unit, and tests will never be absolute. Moreover, even if full semantic coverage were possible to determine, there will still be parts of all "interesting" systems where code-based tests are too difficult to write, too expensive to run, or where correctness is too difficult to determine (or otherwise too uncertain), to make testing worthwhile.
  • And, finally, when it comes to review by people, I'm not sure anything can be said with certainty about what bugs might be missed: usability testing may require unvarnished input from neophytes, while your professor may use rigorous methods to prove your program correct. You should try to know your audience.

And, by implication from the above, one can look at those three types of approach and determine how likely it is that the approach will prevent bugs during code maintenance:

  • The type system systematically prevents the introduction of type errors into the program.
  • Tests prevent bug regressions during maintenance, and following a test-oriented implementation methodology can strongly discourage the introduction of bugs into the system.
  • Techniques for improving readability weakly discourage the introduction of bugs into the system.

So, we should prefer to use the type system, and failing that use tests, and failing that just try to make our code clearer, right? Not exactly.

First of all, these three approaches are not exclusive: one can make advanced use of type systems, and write test code, and improve human reviewability, all at once. Indeed, using the type system, or writing unit tests can improve a human reader's ability to build a mental model of the system under analysis, improving solution elegance.

Secondly, in my experience, the human maintainer is always the most important consideration in software design. If your tests or your data types aren't maintainable, then they are technical debt of one form or another.1

And finally, different people will have different capabilities and preferences, and more robust approaches may be less maintainable in your developer community. OK, so maybe (and I'm not sure that this is true, but it seems plausible) Idris can codify the invariants required by a Red-Black Tree at the type-level, so that compilation would fail if the implementation contains bugs. This sounds really cool, and it makes me want to learn Idris, but Idris's type system is simply more difficult to learn than C#'s, or even Haskell's. If you stick to the "safe" subset of Rust, your code should not have use-after-free bugs or buffer overruns, but people that learn Rust invariably describe their early efforts as, essentially, fighting the borrow-checker. Even writing code for testability is an acquired skill, which takes a significant investment to learn, and your market window may not afford you the time to invest.

This is to say: for the most part, trying to improve correctness will also result in a more maintainable system. But not always. And the more advanced techniques you use in exploiting the type system, or even in designing your tests, the fewer developers will be able to work with your code. In the extreme case, you may get code that is proven correct by the compiler, but that is unmaintainable by humans. Know your audience, and be judicious in the techniques you apply.


  1. I use the term "technical debt" in the sense of something that will require extra work, later, to maintain. By this definition, paying down technical debt does not necessarily require code changes: the payment may be the time and effort taken training developers to work with the code in its existing form.

Wednesday, December 10, 2014

Bootstrapping Rust

There are two stories I particularly like about the problem of bootstrapping self-referential tools. This one:

Another way to show that LISP was neater than Turing machines was to write a universal LISP function and show that it is briefer and more comprehensible than the description of a universal Turing machine. This was the LISP function eval[e,a], which computes the value of a LISP expression e - the second argument a being a list of assignments of values to variables. (a is needed to make the recursion work). Writing eval required inventing a notation representing LISP functions as LISP data, and such a notation was devised for the purposes of the paper with no thought that it would be used to express LISP programs in practice. Logical completeness required that the notation used to express functions used as functional arguments be extended to provide for recursive functions, and the LABEL notation was invented by Nathaniel Rochester for that purpose. D.M.R. Park pointed out that LABEL was logically unnecessary since the result could be achieved using only LAMBDA - by a construction analogous to Church's Y-operator, albeit in a more complicated way.

S.R. Russell noticed that eval could serve as an interpreter for LISP, promptly hand coded it, and we now had a programming language with an interpreter.

And this one:

Then, one day, a student who had been left to sweep up after a particularly unsuccessful party found himself reasoning in this way: If, he thought to himself, such a machine is a virtual impossibility, it must have finite improbability. So all I have to do in order to make one is to work out how exactly improbable it is, feed that figure into the finite improbability generator, give it a fresh cup of really hot tea... and turn it on!

Rust is, I think, the programming language I've been waiting for: it takes a large proportion of the ideas I like from functional languages, applies them in a systems programming language context with 0-overhead abstractions, can run without a supporting runtime, and makes "ownership" and "lifetime" into first-class concepts. In my work trying to build highly complex and robust embedded systems, these are exactly the features I've been waiting for, and I'm thrilled to see a modern programming language take this application space seriously.

Building the language on a second-class platform, though, was intensely frustrating. The frustration came from several places, including that my Unix systems-administration skills have gotten (*ahem*) rusty, but I'd say the bulk of my difficulty came from a single factor: circular dependencies in the tooling.

  • Building the Rust compiler requires a working Rust compiler.
  • Building the Cargo package-manager requires a working Cargo package-manager.

These problems turned out to be ultimately surmountable: The rustc compiler had been previously built for a different version of FreeBSD, but once I re-compiled the required libraries on my version, I managed to get the pre-built compiler running. For the Cargo package manager, I ended up writing a hacky version of Cargo in Ruby, and using that to bootstrap the "true" build. I'm glossing over it here, but this turned out to be a lot of effort to create: Rust is evolving rapidly, and it was difficult to track new versions of Cargo (and its dependencies) that depended on new versions of the Rust compiler, while the Rust compiler itself sometimes did not work on my platform.

This was, necessarily, my first exposure to the platform in general, and it was, unfortunately, not a positive experience for me. Circular dependencies in developer tools are completely understandable, but there should be a way to break the cycle. Making bootstrap a first-order concern does not seem that difficult to me, in these cases, and would greatly enhance the portability of the tools (maybe even getting Rust to work on NetBSD, on GNU/Hurd, or on other neglected platforms):

  • The dependency of the compiler on itself can be broken by distributing the bootstrap compiler as LLVM IR code. Then use LLVM's IR assembler, and other tools, to re-link the stage0 Rust compiler on the target platform. The stage0 compiler could then be used to build the stage1 compiler, and the rest of the build could proceed as it does today. (Rust issue #19706)
  • The dependency of the package manager on itself can be broken by adding a package manager target to, say, tar up the source code of the package manager and its dependencies, along with a shell script of build commands necessary to link all that source code together to create the package manager executable. (Cargo issue #1026)

I am not suggesting that we should avoid circular dependencies in developer tools: Eating our own dogfood is an extremely important way in which our tools improve over time, and tools like this should be capable of self-hosting. But the more difficult it is to disentangle the circular dependency that results, the more difficult it becomes to bootstrap our tools, which will ultimately mean fewer people adopting our tools in new environments. Most of us aren't building Infinite Improbability Drives. Bootstrapping should be a first-order concern.

Monday, November 17, 2014

Links

Taking a cue from Julia Evans, linking this here to remind myself to re-read it occasionally:

John Allspaw's notes On Being a Senior Engineer

Sunday, November 9, 2014

Think Big, Act Small

There are, I think, two big dangers in designing software (which probably apply to other types of creative activity, as well):

  • No up-front design.
  • Requiring complete design to begin implementation.

I call these the two "big" dangers because there are strongly seductive aspects to each, and because each approach has been followed in ways that have driven projects into the ground. I think these approaches are followed because many project planners like to frame the plan in terms of "how much design is necessary before implementation starts?". When you ask the question this way, "no up-front design" and "no implementation without a coherent design" are the simplest possible answers, and each of these answers is reasonable in a way:

  • When avoiding up-front design, you can get straight to the implementation, allowing you to show results early, ultimately allowing you to get earlier feedback from your target audience about whether you're building the correct product.
  • When avoiding implementation before the design is fleshed out, you can be more confident that parts of the implementation will fit together as intended, that fewer parts of the system will require significant rework when requirements change late in the process, and that it is much easier to involve more people in the implementation, since you should be able to rely more on the formal design documentation to coordinate the work.

(I could also make a list of the cons, but ultimately the cons of each approach are visible in the pros of the other. For example, by avoiding up-front design, it becomes much harder to scale the implementation effort to a larger group than, say, 10 or 15 people: The cost of producing formal specifications is high, but (relatively) fixed, while the cost of informal communications starts low, but increases with the square of the number of individuals in the group.)

As I write today, the dominant public thinking in the software developer community is broadly aligned against Big Design Up Front, and towards incremental, or emergent design. I generally share this bias: I consider software design to be the art of learning what the constructed system should look like, in enough detail that the system becomes computer operational (Design Is Learning), and I think learning is better facilitated when we have earlier opportunity for feedback from our design decisions. Further, if we make mistakes in our early design, it's much less expensive to fix those mistakes directly after making them than it is after they become ingrained in our resulting system. It's extremely important to get feedback about the suitability of our design decisions quickly after making them.

But the importance of early feedback does not reduce the importance of early big-picture thinking about the design. I work mostly in real-time embedded systems programming, and in this domain, you cannot ignore the structure of the execution path, or even of the data accessed, between event stimulus and time-critical response. Several operations would have been easier to implement had I ignored these real-time concerns, and the problems that would have resulted would have been invisible in early development (when the system wasn't as stressed, and real-time constraints were looser). On the other hand, we would not have been able to make that system work in that form: large amounts of code would have likely needed rewrite to meet higher load and tighter timing constraints as our system got closer to market. The extra effort put into being able to control the timing of specific parts of our execution path was critical to our system's ability to adapt to tightening real-time requirements.

Which all serves to introduce the core four words of this little essay: "Think Big, Act Small." In other words, consider the whole context of the design you are working on ("think big"), while making sure to frequently test that you are moving towards your goal ("act small"). So, if I'm working on a part of the system, I don't think only of that part, but also of all the other parts with which it will interact, of its current use and of its possible future uses, and of the constraints that this part of the system must operate under. (That is, I try to understand the part's whole context as I design and build it.) On the other hand, if I'm trying to design some large-scale feature, I think of how it breaks down into pieces, I try to think about which of these pieces I'm likely to need the soonest, what sorts of requirements changes are likely to occur, what parts of the break-down those changes are likely to affect the most, and, of the big design, how much work do we actually need to do to meet our immediate needs. (That is, I try to break down the big design into the smallest steps I can that will a) demonstrate progress towards the immediate goal, and b) be consistent with likely future changes.) By the time I actually begin to write the software code, I have probably thought about an order of magnitude larger portion of the system than what I will write to complete my immediate task.

This is hard work, and it can feel like waste to spend time designing software that, often, will never be written. Patience and nerves get worn out, trying to hold a large part of the system in my head at once before the design of a feature or subsystem finally gels and implementation can start. On the other hand, I've found the designs I've created in this way have tended to be much more stable, and maintenance on individual modules tends not to disturb other parts of the system (unless the maintenance task naturally touches that other part of the system as well). In other words, I feel these designs are very well factored. It takes a lot of effort to get there, but echoing an idea made famous by Eisenhower ("plans are worthless, but planning is everything"), in the end I would rather spend up-front time thinking about how to write code that will never need to be written, than spend time at the back-end thinking about how to re-write subsystems that will never meet their design objectives.

Think big: what is the whole context of the effort you are thinking of undertaking? Act small: how can you find out if the path you are on is correct, as early as possible? Know your objectives early, test your decisions early, and adapt to difficulties early, to achieve your goals.

(PS: for more on this theme, see here.)

Wednesday, August 20, 2014

Why, What, How?

I recently encountered an argument (Why Most Unit Testing is Waste) from Jim Coplien arguing forcefully against unit testing as it is generally practiced in the industry today. I generally like his thinking, but I cannot agree with this thesis, because I think of unit tests as filling a different role than Mr. Coplien (and, I think, most others in our profession) think they fill. In order to say what I mean, I'll start by taking a step back.

At root, I think the design process for a piece of software consists of answering three questions:

  1. Why? What problem am I trying to solve, and why does this problem need a solution?
  2. What would the results be, if I had a solution to this problem?
  3. How could the solution work?

There is a natural order in which these questions are asked during design, and it is the same order that they are listed above: you should understand "why" you need a solution before you decide "what" your solution will do, and you should understand "what" your solution will do before you can decide "how" it will do it.

These questions will also often be asked hierarchically: the "why" answer characterizing why you create a product might result in a "how" answer characterizing the domain objects with which your product is concerned. But that "how" answer immediately raises another "why" question: "why are these the correct concepts to model?". And this "why" question will lead to another "how" answer characterizing the roles and responsibilities of each domain object. And so on down, where "how" answers at one level of abstraction become "why" questions at the more concrete level, until one reaches implementation code, below which it is unnecessary to descend.

It's also notable that there is only one question in this list whose answer will be guaranteed to be visible in a design artifact, and will be guaranteed to be consistent with the execution model of the system: the question, "how does this work?", is ultimately answered in code. Neither of the other questions will necessarily be answered in a design artifact, and even if they are answered in a design artifact, it is likely that this artifact will become inconsistent with the system as built, over time, unless there is some force working against this. And as design artifacts grow stale, they become less useful. In the end (and again, in the absence of some force pulling in the other direction), the only documentation guaranteed to be useful in understanding a design is the code itself.

This is unfortunate. Because design (including implementation!) is a learning process, our understanding of why we make certain decisions can and will change significantly during design, almost guaranteeing significant drift between early design documentation and the system as built. If, in mitigating this problem, one relies primarily on the code for design documentation, then it takes significant mental work to work out "what" the module does from "how" it does it, and still more work to go backwards from "what" the module does to "why" it does it - that is, in relying primarily on the code for design documentation, you are two degrees removed from understanding the design motivation.

Consider, instead, code with a useful test suite. While the "how does this work?" question is answered by the system code itself, the "what does this do?" question is answered by the test code. With a good test suite, you will see the set of "what" answers that the designer thought were relevant in building the set of code under test. You do not need to work backwards from "how", and you have removed a degree of uncertainty in trying to understand the "why" behind a set of code. And, if the test suite is run with sufficient frequency, then the documentation for these "what" answers (that is, the test code itself) is much less likely to drift away from the execution model of the system. Having these two views into the execution model of the system — two orthogonal views — should help maintainers more rapidly develop a deeper understanding of the system under maintenance.
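As a minimal, hypothetical illustration (the function and its test are invented for this post, not taken from any real code base): the implementation answers "how does this work?", while the test enumerates "what does this do?":

#include <assert.h>

// "How": clamp a value into the inclusive range [lo, hi].
static int clamp(int value, int lo, int hi)
{
  if (value < lo) return lo;
  if (value > hi) return hi;
  return value;
}

// "What": the behaviors the designer considered relevant, stated as checks.
static void test_clamp()
{
  assert(clamp(5, 0, 10) == 5);   // in-range values pass through unchanged
  assert(clamp(-3, 0, 10) == 0);  // values below the range are raised to the floor
  assert(clamp(42, 0, 10) == 10); // values above the range are lowered to the ceiling
}

int main()
{
  test_clamp();
  return 0;
}

A maintainer who reads only clamp sees the mechanics; a maintainer who also reads test_clamp sees which behaviors were intended, without having to reverse-engineer them.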

Furthermore, on the designer's part, the discipline of maintaining documentation not just for "how does this process work?", but also for "what does this process do?" (in other words, having to maintain both the code and the tests) encourages more rigorous consideration of the system model than would otherwise be the case: if I can't figure out a reasonable way to represent "what does this do" in code (that is, if I can't write reasonable test code), then I take it as a hint that I should reconsider my modular breakdown. Remember Dijkstra's admonition:

As a slow-witted human being I have a very small head and I had better learn to live with it and to respect my limitations and give them full credit, rather than try to ignore them, for the latter vain effort will be punished by failure.

Because it takes more cognitive effort to maintain both the tests and the code than to maintain the code alone, I must consider simpler entities if I'm to keep both the code and the tests in my head at once. When this constraint is applied throughout a system, it encourages complexity reduction in every unit of the system — including the complexity of inter-unit interactions. Since complexity is being reduced while maintaining system function, it must be the accidental complexity of the system being taken out, so that a higher proportion of the remaining system complexity is essential to the problem domain. By previous argument, this implies that the unit-testing discipline encourages increased solution elegance.

This makes unit-testing discipline an example of what business-types call "synergy" - a win/win scenario. On the design side, following the discipline encourages a design composed of simpler, more orthogonal units. On the maintenance side, the existence of tests provides an orthogonal view of the design, making the design of individual units more comprehensible. This makes it easier for maintainers to develop a mental model of the system (going back to the "why" question that motivated the system in the first place), so that required maintenance will be less likely to result in model inconsistency. A more comprehensible, internally consistent system is less risky to maintain than a less comprehensible, or internally inconsistent, one would be. Unit testing encourages design elegance.

Sunday, July 27, 2014

Sources of Accidental Complexity

Recalling my preferred definition of solution elegance:

TotalComplexity = EssentialComplexity + AccidentalComplexity
Elegance = EssentialComplexity / TotalComplexity

So we want to reduce accidental complexity. Which requires identifying accidental complexity. This is subjective - the perception of complexity will vary by observer (I wrote about my experience here), and the same observer will perceive the same code as differently complex at different times. It will often be the case that two types of accidental complexity will be in opposition to each other: reducing accidental complexity of type A results in accidental complexity of type B, and reducing complexity of type B results in complexity of type A. So this will be a balancing act, and choosing the right balance means knowing your audience.

So far, so abstract. This will hopefully become clearer through example. If we accept that perceived complexity is proportional to the effort that must be spent to work with a system under study, then we can look at different types of effort we might spend, and then say when that type of effort can be considered "accidental" or "essential" to the system. This may become an ongoing series, so I'll start with two such types of effort that I'll call "lookup effort" and "interpretation effort".

Lookup effort

"Lookup effort" is the effort spent referring to external documents to understand the meaning of a piece of code. I use the term "document" broadly in this context: an external document may refer to traditional published documentation, or it may refer to source code located elsewhere in the system. The important point is that it is external: having to maintain the context of the code under study in your head while you look something up feels like an extra effort, which means it makes the system feel more complex.

This type of effort can feel "essential" to understanding the system as a whole. It will feel "essential" when the external reference neatly matches a domain boundary: in this case, the fact that the lookup is external reinforces that the domain includes a separation of concerns. The fact of having to traverse to an external reference can teach you something about the domain.

On the other hand, this type of effort will feel "accidental" in just about every other scenario:

  • when reading code, having to look up a library function, or language feature, that isn't well understood by the reader (as opposed to inlining the called construct, using mechanisms that the reader already understands);
  • when debugging code, building a highly detailed model of runtime behavior (which often requires moving rapidly through several layers of abstraction, to come to complete understanding);
  • when writing code, determining the calling conventions (function name, argument order and interpretation) of a function to be called.

In fact, I'd argue that the major force preventing this type of effort from becoming overwhelming is the fact that it disappears as our system vocabulary improves, just as, when our natural-language vocabulary improves, we don't need to refer to the dictionary as frequently. The time spent looking concepts up disappears when the concept is already understood. For example, if I am reasonably familiar with C, and read the code:

strncpy(CustomerName, Arg, MaxNameLength);

I do not need to refer to any reference documentation to understand that we are copying the string at Arg to the memory at CustomerName: I've already internalized what strncpy means and how it works, so that reading this line of code does not cause me to break my flow. On the other hand, at the time I wrote this question on stackoverflow, I was not prepared to understand mapM (enumFromTo 0). Since I understand strncpy very well already, it has no associated lookup effort. However, at the time I wrote that question, mapM (enumFromTo 0) had an extremely high lookup complexity, as it relied on familiarity with concepts I did not yet understand, and vocabulary I had not yet developed.

Interpretation effort

"Interpretation effort" is the effort that must be spent to develop an understanding of a linear block of code. There are several metrics that have been developed that can give a sense of the scale of interpretation effort (some of my favorites are McCabe's cyclomatic complexity, and variable live time (I couldn't easily find an on-line reference, but the metric is described in McConnell's Code Complete), but it will usually be true that a longer linear block of code will be more effort to interpret than a shorter block.

Linear blocks of code will ideally be dense with domain information, so that interpretation effort should feel essential to understanding the sub-domain. Since code blocks are the basic unit in which we read and write code, and since the problem domain should be dominant in our thinking as we read and write code, linear code blocks will naturally have a high density of information about the domain. Except when they don't. Which will happen by accident. I'm trying (failing?) to be cute, but to be more straightforward about it, what I mean is that the default state of a linear code block is to consist of problem essence. It is accident that pulls us out of this state.

But software is accident-prone. Among the sources of accidental complexity in linear code blocks are:

  • Repeated blocks of structurally identical code, AKA cut-and-paste code. This code must be re-interpreted each time it is encountered. If it were (properly) abstracted into a named external block, it would only need to be read and interpreted once, and given a name by which the common function can become part of the system vocabulary (see the sketch after this list).
  • Inconsistent style. When software is maintained by multiple authors, it is unlikely that the authors will naturally have the same preferences regarding indentation style, symbol naming, or other aspects of style. To the extent that the written code does not look like the product of a single mind, there will be greater interpretation effort, as the reader must try to see the code through each maintainer's mind as she tries to understand the code in question.
  • Multiple levels of abstraction visible at once. See this.
  • Boilerplate code.
  • And much more!
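To make the first of those bullets concrete, here is a small, hypothetical before-and-after (the names are invented for illustration):

// Before: the same structural block is repeated at each call site, and must
// be re-interpreted every time it is encountered:
//
//   if (name == NULL || name[0] == '\0') { return -1; }
//   ...
//   if (address == NULL || address[0] == '\0') { return -1; }

// After: the block is read and interpreted once, under a name that can join
// the system vocabulary:
static bool is_blank(const char *s)
{
  return s == NULL || s[0] == '\0';
}

//   if (is_blank(name) || is_blank(address)) { return -1; }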

Trade-offs

"Interpretation effort" forms a natural pair with "lookup effort": decreasing lookup effort (by inlining code) will naturally come at the expense of increasing interpretation effort, while decreasing interpretation effort (by relying on external code for part of the function's behavior) will tend to increase lookup effort. There are guidelines that will generally be useful in picking the right balance in your design. (Aiming for high cohesion in code is intended to reduce interpretation effort, aiming for low coupling is intended to reduce lookup effort, and aiming for high fan-in and low fan-out is intended to help minimize the required system vocabulary.) In general, I would bias towards having greater "lookup effort" than "interpretation effort" in a design, as lookup effort can be eliminated by improving system vocabulary while interpretation effort will always be present. This advice will apply in most situations, but will not necessarily apply in all. Internalizing not just the rules, but also the rationale for the rules, will make it possible for you to make the right decisions for your audiences.