Subtraction is not the same as comparison

I tracked down a bug the other day that really surprised me. Again, not because of the code itself, but because the person who wrote the code should have known better.

If you want to sort a list of a user-defined type in C# (and other languages), you need a comparison method. There are several different ways to do that, perhaps the simplest being to create a comparison delegate. For example, if you have this class definition:

    public class Foo
    {
        public int Id {get; private set;}
        public string Name {get; set;}
        // constructor and other stuff here . . .
    }

And a list of those:

    List<Foo> MyList = GetFooThings(...);

Now, imagine you want to sort the list in place by Id. You can write:

    MyList.Sort((x,y) => { return x.Id.CompareTo(y.Id); });

CompareTo will return a value that is:

  • Less than 0 if x.Id < y.Id
  • Equal to 0 if x.Id = y.Id
  • Greater than 0 if x.Id > y.Id

The code I ran into was similar, except the programmer thought he’d “optimize” the function. Rather than incurring the overhead of calling Int32.CompareTo, he took a shortcut:

    MyList.Sort((x,y) => { return x.Id - y.Id; });

His thinking here is that the result of subtracting the second argument from the first will contain the correct value. And that’s true most of the time. It’s not at all surprising that his testing passed. After all, 12 - 3 returns a positive number, 300 - 123456 returns a negative number, and 99 - 99 returns 0.

But he forgot about negative numbers and integer overflow. Subtracting a negative number is like adding a positive number. That is, 4 - (-1) = 5. And in the artificial world of computers, there are limits to what we can express. The largest 32-bit signed integer we can express is Int32.MaxValue, or 2,147,483,647. If you add 1 to that number, the result wraps around to Int32.MinValue, or -2,147,483,648. So the result returned by the comparison delegate above is incorrect when x.Id is a very large positive number and y.Id is a sufficiently large negative number. It will report, for example, that 2,147,483,647 is smaller than -1!
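You can see the wrap for yourself with a couple of lines of C#:

    int x = Int32.MaxValue;             // 2,147,483,647
    int y = -1;
    Console.WriteLine(x - y);           // -2147483648, so the "comparison" says x < y
    Console.WriteLine(x.CompareTo(y));  // 1, meaning x > y, as it should be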

There’s a reason that Int32.CompareTo is written the way it is. Here it is, directly from Microsoft’s Reference Source:

    public int CompareTo(int value) {
        // Need to use compare because subtraction will wrap
        // to positive for very large neg numbers, etc.
        if (m_value < value) return -1;
        if (m_value > value) return 1;
        return 0;
    }

It seems like I run into these fundamental errors more often now than I did in the past. I’m not yet sure of the reason. I’ve been active in the online programming community for 30 years, and have seen a lot of code written by programmers of all skill levels. Early on, the average skill level of the programmers I met online was much higher than it is today. These days, every Java programming student has access to Stack Overflow, and I see a lot of these rookie mistakes. But I also see many such mistakes made by more experienced programmers.

I’m wondering if the difference is one of education. When I started programming, we learned assembly language early on and actually used it to write real programs. Two’s complement integer representation was a lesson we learned early, and integer overflow was something we dealt with on a daily basis. We would not have made this mistake. No, we made much more interesting mistakes.

This error can be caught, by the way. If you enable runtime overflow checking in C#, the runtime will throw an exception if the result of an arithmetic operation overflows. But that check is almost always turned off because it can have a large negative impact on performance, and in many places we depend on integer overflow for our algorithms to work correctly. When the global check is enabled, you can exempt a specific expression or block of code with the unchecked keyword; when it’s disabled, you can opt in for a specific expression or block with the checked keyword. But using checked requires that you understand the issue in the first place, and a programmer who understood the issue wouldn’t have written the erroneous code.
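In code, opting in for just the suspect expression looks something like this:

    int x = Int32.MaxValue;
    int y = -1;
    checked
    {
        int diff = x - y;   // throws OverflowException instead of wrapping
    }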

Errors like this can exist in a working program for years, and then crop up at the most inopportune time when somebody uses the code in a new way or presents data that the original programmer didn’t envision. As much as our languages and tools have evolved over the past 30 years, in some ways we’re still distressingly close to the metal.

Pairing heap representation

Up to this point, I’ve been showing pairing heap nodes as having a child list. Conceptually, the node structure looks like this:

    class HeapNode
    {
        public int Data;
        public HeapNode[] Children;
    }

That’s a good conceptual model, but implementing that model can be unwieldy, and can consume a huge amount of memory. When working with small objects in managed languages like .NET, memory allocation overhead can easily exceed the memory used for data. That’s especially true of arrays, which have a 56-byte allocation overhead.

Granted, not all nodes in a pairing heap have children. In fact, at most half of them will. So we can save memory by not allocating an array for the children if there aren’t any. But that adds some complexity to the code, and at best saves only half of the allocation overhead.

Using the .NET List<T> collection doesn’t help, because List<T> uses an array as the backing store. LinkedList<T> will work, but involves its own overhead what with having to manage LinkedListNode instances.

In short, managing a per-node list of children can be difficult.

In my introduction to the Pairing heap, I showed this figure:

    2
    |
    8, 7, 3
    |     |
    9     4, 5
          |
          6

2 is the root of the tree. It has child nodes 8, 7, and 3. 9 is a child of 8. 4 and 5 are children of node 3, and 6 is a child of 4.

More traditionally, that tree would be drawn as:

                   2
                 / | \
                8  7  3
               /     / \
              9     4   5
                   /
                  6

That’s the traditional view of the tree. But we can also say that 8 is the first child of 2, 7 is the sibling of 8, and 3 is the sibling of 7. That is, every node has a reference to its first child, and to its next sibling. The node structure, then, is:

    class HeapNode
    {
        public int Data;
        public HeapNode FirstChild;
        public HeapNode Sibling;
    }

As it turns out, there’s a well-known binary tree structure called left-child right-sibling. Any traditional tree structure can be represented as such a binary tree. Our tree above, when represented as a left-child right-sibling binary tree, becomes:

                 2
                /
               8
              / \
             9   7
                  \
                   3
                  /
                 4
                / \
               6   5

You might notice that this structure bears striking similarity to the original figure from my introduction. It’s the same thing, only rotated 45 degrees clockwise.

As you can see, this builds a very unbalanced binary tree. That’s okay, since we’re not searching it. With a pairing heap, all of the action is in the first few levels of the tree. A deep tree is good, because it means that we rarely have many siblings to examine when deleting the smallest node.
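Walking a node’s child list in this representation is just a matter of following Sibling references. A minimal sketch, using the HeapNode class above:

    // Visit each child of 'node': the first child, then its siblings.
    for (HeapNode child = node.FirstChild; child != null; child = child.Sibling)
    {
        Console.WriteLine(child.Data);
    }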

Implementing a Pairing heap in a left-child-right-sibling binary tree is incredibly easy. Next time I’ll show a simple implementation in C#.

But that’s the way we’ve always done it!

It’s surprising how many Computer Science articles and textbooks, when showing how to implement a binary heap in an array, present code that has the root node at index 1 in C-like languages where the first element of an array is at index 0. In those implementations, element 0 in the array is unused.

This bothers me, not because the code wastes an insignificant amount of memory, but because it leads to a lot of confusion among students and junior programmers who are trying to implement a binary heap. Off-by-one errors abound, the first being in allocating the heap array. Here’s a common error that occurs when allocating a heap to hold 100 integers.

    int[] heap = new int[100];  // with the root at index 1, this actually holds only 99 items

We’re conditioned from the very start of our programming education to begin counting at 0. Every time we have stuff in an array or list, the first thing is at index 0. Every array-based algorithm we work with reinforces this lesson. We’re taught that some languages used to start at 1, but those heretical languages have nearly been eliminated from the world. And then they foist this folly on us: a heap’s root is at index 1.

We’re taught that 100 items means indexes 0 through 99. It’s beaten into our heads on a daily basis when we’re learning programming. Then they trot out this one exception, where we have to allocate an extra item and count from 1 to 100 rather than 0 to 99 like normal programmers do.

Some people say, “but if you start at 0, then the calculations to find the children won’t work.” Well, they’re partially right. If the root is at index 1, then the left child of the node at index x is at index (x * 2), and the right child is at index (x * 2) + 1. The parent of the node at index x is at index x/2. They’re right in that those calculations don’t work if you move the root to index 0. But the changes required to make the calculations work are trivial.

If the root is at index 0, then the left child is at (2 * x) + 1, right child at (2 * x) + 2, and parent at (x-1)/2.
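In code, the 0-based calculations are:

    static int LeftChild(int x)  { return 2 * x + 1; }
    static int RightChild(int x) { return 2 * x + 2; }
    static int Parent(int x)     { return (x - 1) / 2; }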

The hard-core optimizer will tell you that the extra addition in computing the left child, and the extra subtraction in computing the parent, will incur a big performance hit. In truth, in the context of a real, working computer program, the performance difference is down in the noise. It’s highly unlikely that your program’s overall performance will suffer from the addition of a couple of ADD or SUB instructions in a binary heap implementation. If it does, then you’re doing something horribly wrong. Doing something stupid in order to save a few nanoseconds here and there is … well … stupid.

No, I think the real reason we continue this madness is historical. The first known reference to a binary heap is in J.W.J. Williams’ implementation of Heapsort (Williams, J.W.J. (1964), ‘Algorithm 232: Heapsort’, Communications of the ACM 7 (6), 347-348). Yes, that’s right: 55 years ago. His code sample, like all ACM code samples at the time, was written in Algol, a language in which array indexes start at 1.

Textbooks with code samples in Algol and, later, Pascal, perpetuated the idea that the root of a binary heap must be at index 1. It made sense, what with 1 being the first index in the array. When those books were revised in the 1990s and the examples presented in C, Java, C++, etc., a literal conversion of the code maintained the 1-based root even though arrays in those languages are 0-based. Nobody stopped to consider how confusing that empty first element can be to the uninitiated.

What I find disturbing is that whoever did the code conversions almost certainly ran into an off-by-one problem when testing the examples. But rather than spend time to rewrite the code, they “fixed” the problem by allocating an extra item, and maybe explained it away in the text as something that just has to be done to keep the root at index 1. Those few who actually understood the issue seem to have been too lazy to make the correction, opting instead to explain it away as a performance issue.

In so doing, they’ve done their audiences a grave disservice. I find that inexcusable.

An interesting interview question

I ran across a problem today that I think would make an excellent interview question.

You work in a warehouse that has N storage compartments, labeled D-1 through D-N. Each storage compartment can hold one shipping crate. There’s also a temporary holding area, called D-T, that can hold a single crate. There also are N crates, labeled C-1 through C-N, each of which is in a storage compartment.

There is a forklift that can pick up a single crate, move it to an empty position, and put it down.

The owner, after touring the facility, decreed that the crates must be rearranged so that crate C-x is in space D-x (C-1 in D-1, C-2 in D-2, … C-N in D-N).

What algorithm would you employ to rearrange the crates in the minimum number of moves?

This is a fairly simple problem. If you pose it as a “balls and bins” problem, with a physical model to play with, most people will develop the optimum algorithm in a matter of minutes. But pose it as a programming problem and a lot of programmers have trouble coming up with the same algorithm.

If you’re interested in trying the problem yourself, cut five squares (crates) from a piece of paper, and label them C1 through C5. Draw six boxes on another sheet of paper, and label them D1 through D6. Now, arrange the crates on the spaces in this order: C4, C1, C5, C3, C2. Leave D6 empty. Now, rearrange them. Remember, if you pick up a crate you have to place it in an empty spot before you can pick up another.

So if this problem is so simple, why is it such an interesting interview question and why do programmers have trouble with it?

One reason is that it’s not really a sorting problem. At least not in the traditional sense. You see, I’m not asking the candidate to write a sort. In fact, I’m not asking him to write a program at all. I’m asking him to develop an algorithm that will rearrange the crates in a specific order, given the constraints. The candidate’s first reaction tells me a lot about how he approaches a problem. Granted, I have to take into account that the candidate will expect a whiteboard programming problem, but if his immediate reaction is to stand up and start writing some kind of sort method, I can pretty much guarantee that he’s going down the wrong track.

The algorithm that uses the fewest moves is not one that you would typically write. It’s either computationally expensive or it uses extra memory. And there’s a well-known efficient solution to sorting sequential items. That algorithm works well in the computer, but is less than optimum when translated to the physical world, where it takes time for a forklift to pick up an item and move it.

The well-known algorithm starts at the first storage compartment, D-1. If the item in that position is not crate C-1, it swaps the crate with the crate in the position that it needs to be in. It continues making swaps until C-1 is in D-1, and then moves on to compartment D-2 and does the same thing. So, for example, if you have five crates that are arranged in the order [4,1,5,3,2,x] (the x is for the empty spot, initially D-T) the sequence of swaps is:

    Swap 4 with 3 (the item that's in position D-4), giving [3,1,5,4,2,x]
    Swap 3 with 5, giving [5,1,3,4,2,x]
    Swap 5 with 2, giving [2,1,3,4,5,x]
    Swap 2 with 1, giving [1,2,3,4,5,x]

At which point the programmer says, “Done!”

The programmer who develops this solution and calls it done might think again if he re-reads the problem statement. Note that nowhere did the algorithm use the temporary location, D-T. Either the programmer will smell a rat, or he’ll dismiss that temporary location as a red herring.

As it turns out, it’s not a red herring. A swap is actually a sequence of three separate operations. Swapping the contents of D-1 and D-4, for example, results in:

  1. Move the contents of D-1 to a temporary location.
  2. Move the contents of D-4 to D-1.
  3. Move the contents of the temporary location to D-4.

The well-known solution to the problem–the solution that is in many ways optimum on the computer–results in 12 individual moves:

  1. Move crate C-4 from D-1 to D-T.
  2. Move crate C-3 from D-4 to D-1.
  3. Move crate C-4 from D-T to D-4. (Crate C-4 is in position.)
  4. Move crate C-3 from D-1 to D-T.
  5. Move crate C-5 from D-3 to D-1.
  6. Move crate C-3 from D-T to D-3. (Crate C-3 is in position.)
  7. Move crate C-5 from D-1 to D-T.
  8. Move crate C-2 from D-5 to D-1.
  9. Move crate C-5 from D-T to D-5. (Crate C-5 is in position.)
  10. Move crate C-2 from D-1 to D-T.
  11. Move crate C-1 from D-2 to D-1. (Crate C-1 is in position.)
  12. Move crate C-2 from D-T to D-2. (Crate C-2 is in position.)

In the worst case, of which this is an example, the algorithm makes N-1 swaps, or 3N-3 moves.

At this point, it might be useful to have a discussion about what problem we’re really trying to solve. The problem asked for the minimum number of moves. A reminder that we’re talking about a physical process might be in order. To programmers who are used to things happening almost instantly, a few extra moves here and there don’t really make a difference. The solution presented above is asymptotically optimum in that the number of moves required is linearly proportional to the number of crates. That’s typically a good enough answer for a programming question. But, again, this isn’t really a programming question. One can assume that the client wants to make the minimum number of moves because he has to pay a forklift operator $15 per hour and it takes an average of five minutes for every move. So if he can reduce the number of moves from 12 to 6, he saves a half hour of paid forklift time.

The solution that most people presented with the balls and bins problem develop does things a bit differently. Rather than starting by moving crate C-4 to its rightful place, the algorithm starts by emptying the first bin (i.e. moving C-4 to D-T), and then filling the empty location with the item that belongs there. Then, it fills the new empty slot with the element that belongs there, etc. Let’s see how it works, again starting with [4,1,5,3,2,x].

  1. Move crate C-4 from D-1 to D-T.
  2. Move crate C-1 from D-2 to D-1. (Crate C-1 is in position.)
  3. Move crate C-2 from D-5 to D-2. (Crate C-2 is in position.)
  4. Move crate C-5 from D-3 to D-5. (Crate C-5 is in position.)
  5. Move crate C-3 from D-4 to D-3. (Crate C-3 is in position.)
  6. Move crate C-4 from D-T to D-4. (crate C-4 is in position.)

That took six moves, or half of what the “optimum” solution took. That’s not quite a fair comparison, though, because that’s not the worst case for this algorithm. The worst case occurs when moves result in a cycle that leaves D-T empty before all of the items are in order. Consider the sequence [2,1,5,3,4,x]. Using our new algorithm, we start with:

  1. Move crate C-2 from D-1 to D-T.
  2. Move crate C-1 from D-2 to D-1. (Crate C-1 is in position.)
  3. Move crate C-2 from D-T to D-2. (Crate C-2 is in position.)
    At this point, our temporary spot is empty, so we need to make a new empty spot by moving the first out of place crate to D-T.
  4. Move crate C-5 from D-3 to D-T.
  5. Move crate C-3 from D-4 to D-3. (Crate C-3 is in position.)
  6. Move crate C-4 from D-5 to D-4. (Crate C-4 is in position.)
  7. Move crate C-5 from D-T to D-5. (Crate C-5 is in position.)

It’s interesting to note that the swapping algorithm requires three swaps (nine moves) to rearrange items given this initial arrangement.

As far as I’ve been able to determine, the worst case for this algorithm requires 3N/2 moves. It will never make more moves than the swapping algorithm, and often makes fewer.

The point of posing the problem to the candidate is to get him to develop an approach to the problem before he starts coding.

Another interesting thing about this question is that it allows for a follow-on question. Assuming that the candidate develops the optimum algorithm, ask him to write code that, given the starting position, outputs the moves required to rearrange the crates. In other words, implement the algorithm in code.

At this point, he’ll have to make a decision: use sequential search to locate specific crates, or create an index of some kind to hold the original layout so he can quickly look up an individual crate’s location. Again, he should be asking questions because the requirements are deliberately ambiguous. In particular, he might think that he can’t use the index because we only allowed him one extra swapping space for the crates. But again, the original problem statement doesn’t mention a computer program, and the follow-on question doesn’t place any restrictions on the memory consumption.

I’ll leave implementation of the algorithm as an exercise for those who are interested. It’s not difficult, just a bit odd for those who are used to writing traditional sorting algorithms.
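If you want to check your work, here’s one rough sketch, using sequential search (Array.IndexOf) to locate each crate. The representation is mine: slots[i] holds the number of the crate currently in compartment D-(i+1), and a zero marks an empty compartment.

    // Emit the forklift moves that put crate C-x into compartment D-x.
    static void Rearrange(int[] slots)
    {
        for (int i = 0; i < slots.Length; i++)
        {
            if (slots[i] == i + 1) continue;    // crate is already in place
            // Empty this compartment into D-T, then chase the cycle.
            int temp = slots[i];
            Console.WriteLine($"Move C-{temp} from D-{i + 1} to D-T.");
            int hole = i;
            while (temp != hole + 1)
            {
                int source = Array.IndexOf(slots, hole + 1);
                Console.WriteLine($"Move C-{hole + 1} from D-{source + 1} to D-{hole + 1}.");
                slots[hole] = hole + 1;         // crate is in position
                slots[source] = 0;              // that compartment is now the hole
                hole = source;
            }
            Console.WriteLine($"Move C-{temp} from D-T to D-{hole + 1}.");
            slots[hole] = temp;                 // crate from D-T is in position
        }
    }

Called with new[] {4, 1, 5, 3, 2}, the sketch prints the six moves listed above; called with new[] {2, 1, 5, 3, 4}, it prints the seven.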

Ultimately, the reason I find this problem a potentially useful interview question is because it forces the candidate to differentiate between the customer’s requirements (the optimum sequence of steps required to arrange his crates) and the internal implementation details. The extent to which the candidate can make that differentiation tells me a lot about how she will approach the real-world problems that we’ll present to her on a daily basis. In addition, the simple problem is rife with opportunities to have other discussions about approaches to problems and common coding tradeoffs.

It occurred to me as I was writing this that I should construct a balls and bins prop that I can trot out if the candidate doesn’t hit on the algorithm relatively quickly. It would be interesting to see how quickly the candidate, if given the prop, would discover the optimum algorithm.

I’m kind of excited about getting an opportunity to try this out.

Null is evil, Chapter 9346

If we’ve learned anything from 45 years of C coding, it’s that NULL is evil. At least relying on NULL return values is usually evil, especially when it forces you to write contorted special case handling code. I ran across a case in point today that surprised me because the programmer who wrote it should have known better. And for sure the senior programmer who reviewed the code should have flagged it for rewriting.

There is a method that takes an existing object and updates it based on fields in another object. The code writes a change log entry for each field updated. The code is similar to this:


    private bool UpdatePerson(Person person, Models.Person personUpdate)
    {
        var changes = new List<ChangeLogEntry>();
        
        changes.Add(CompareValues(person.FirstName, personUpdate.FirstName, "First Name"));
        changes.Add(CompareValues(person.LastName, personUpdate.LastName, "Last Name"));
        // ... other fields here
        
        changes = changes.Where(x => x != null).ToList();  // remove the nulls
        
        // ... update the record, write the change log, return success ...
    }

This is Just Wrong. It works, but it’s wonky. The code is adding null values to a list, knowing that later they’ll have to be removed. Wouldn’t it make more sense not to insert the null values in the first place?

One way to fix the code would be to write a bunch of conditional statements:


    ChangeLogEntry temp;
    temp = CompareValues(person.FirstName, personUpdate.FirstName, "First Name");
    if (temp != null) changes.Add(temp);
    
    temp = CompareValues(person.LastName, personUpdate.LastName, "Last Name");
    if (temp != null) changes.Add(temp);
    
    // etc., etc.

That’s kind of ugly, but at least it doesn’t write null values to the list.

The real problem is the CompareValues function:


    private ChangeLogEntry CompareValues(string oldValue, string newValue, string columnName)
    {
        if (oldValue != newValue)
        {
            var change = new ChangeLogEntry(columnName, oldValue, newValue);

            return change;
        }

        return null;
    }

Yes, it’s poorly named. The method compares the values and, if the values are different, returns a ChangeLogEntry instance. If the values are identical, it returns null. Perhaps that’s the worst thing of all: returning null to indicate that the items are the same. This model of returning null to indicate status is a really bad idea. It forces clients to do wonky things. Of all the horrible practices enshrined in 40+ years of C-style coding, this one is probably the worst. I’ve seen and had to fix (and, yes, written) too much terrible old-style C code to ever advocate this design pattern. null is evil. Avoid it when you can.

And you usually can. In this case it’s easy enough. First, let’s eliminate that CompareValues function:


    if (person.FirstName != personUpdate.FirstName) changes.Add(new ChangeLogEntry("First Name", person.FirstName, personUpdate.FirstName));
    if (person.LastName != personUpdate.LastName) changes.Add(new ChangeLogEntry("Last Name", person.LastName, personUpdate.LastName));

That’s still kind of ugly, but it eliminates the null values altogether. And we can create an anonymous method that does the comparison and logging, thereby removing duplicated code. The result is:


    private void UpdatePerson(Person person, Models.Person personUpdate)
    {
        var changes = new List<ChangeLogEntry>();
        
        // Anonymous method logs changes.
        var logIfChanged = new Action<string, string, string>((oldValue, newValue, columnName) =>
        {
            if (oldValue != newValue)
            {
                changes.Add(new ChangeLogEntry(columnName, oldValue, newValue));
            }
        });
        
        logIfChanged(person.FirstName, personUpdate.FirstName, "First Name");
        logIfChanged(person.LastName, personUpdate.LastName, "Last Name");
        // ... other fields here

        if (changes.Any())
        {
            // update record and write change log
        }
    }

That looks a lot cleaner to me because there are no null values anywhere, and no need to clean trash out of the list.

Here’s the kicker: I didn’t rewrite the code. If the code were as simple as what I presented here, I probably would have changed it. But the code is a bit more complex and it’s currently running in a production environment. I hesitate to change working code, even when its ugliness causes me great mental pain. I’m almost certain that I can make the required changes without negatively affecting the system, but I’ve been burned by that “almost” too many times with this code base. If I had a unit test for this code, I’d change it and depend on the test to tell me if I screwed up. But there’s no unit test for this code because … well, it doesn’t matter why. Let’s just say that this project I inherited has a number of “issues.” The one I’ll relate next time will boggle your mind.

Efficiency of Pairing heap

Last time, I introduced the Pairing heap, and showed an example of its structure and how it changes as items are added and removed. At first glance it seems unlikely that this can be faster than the binary heap I spent so much time exploring three years ago. But it can be. Let me show you how.

As I mentioned in the discussion of the binary heap, inserting an item takes time proportional to the base-2 logarithm of the heap size. If there are 100 items in the heap, inserting a new item will take, worst case, O(log2(100)) operations. Or, about 7 swaps. It could be fewer, but it won’t be more. In addition, removing the minimum item in a min-heap will require O(log2(n)) operations. In a binary heap, insertion and removal are O(log n).

As you saw in my introduction to the Pairing heap, inserting an item is an O(1) operation. It never involves more than a comparison and adding an item to a list. Regardless of how many items are already in the heap, adding a new item will take the same amount of time.

Removal, though, is a different story altogether. Removing the smallest item from a Pairing heap can take a very long time, or almost no time at all.

Consider the Pairing heap from the previous example, after we’ve inserted all 10 items:

    0
    |
    6,4,3,5,8,9,7,2,1

It should be obvious that removing the smallest item (0) and re-adjusting the heap will take O(n) operations. After all, we have to look at every item during the pair-and-combine pass, before ending up with:

    1
    |
    2, 8, 3, 4
    |  |  |  |
    7  9  5  6

But we only look at n/2 items the next time we remove the smallest item. That is, we only have to look at 2, 8, 3, and 4 during the pair-and-combine pass. The number of items we look at during successive removals is cut in half with each removal, until we get to two or three items per removal. Things fluctuate a bit, and it’s interesting to write code that displays the heap’s structure after every removal.

By analyzing the mix of operations required in a very large number of different scenarios, researchers have determined that the amortized complexity of removal in a Pairing heap is O(log n). So, asymptotically, removal from a Pairing heap is the same complexity as removal from a binary heap.

It’s rather difficult to get a thorough analysis of the Pairing heap’s performance characteristics, and such a thing is way beyond the scope of this article. If you’re interested, I suggest you start with Towards a Final Analysis of Pairing Heaps.

All other things being equal, Pairing heap should be faster than binary heap, simply because Pairing heap is O(1) on insert, and binary heap is O(log n) on insert. Both are O(log n) on removal. Keep in mind, though, that an asymptotically faster algorithm isn’t always faster in the real world. I made that point some time back in my post, When theory meets practice. On average, Pairing heap does fewer operations than does binary heap when inserting an item, but Pairing heap’s individual operations are somewhat more complex than those performed by binary heap.

You also have to take into account that Pairing heap can do some things that binary heap can’t do, or can’t do well. For example, the merge (or meld) function in Pairing heap is an O(1) operation. It’s simply a matter of inserting the root node of one heap into another. For example, consider these two pairing heaps:

    1                  0
    |                  |
    2, 8               5
    |
    4

To merge the two heaps, we treat the root nodes just as we would two siblings after removing the smallest item. We pair them to create:

    0
    |
    1, 5
    |
    2, 8
    |
    4
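In code, using the first-child/next-sibling HeapNode structure shown earlier, meld might look something like this sketch:

    // Meld two pairing heaps in O(1): the larger root joins the smaller
    // root's child list, keeping its own children. A sketch only.
    static HeapNode Meld(HeapNode a, HeapNode b)
    {
        if (a == null) return b;
        if (b == null) return a;
        if (b.Data < a.Data) { var t = a; a = b; b = t; }  // make 'a' the smaller root
        b.Sibling = a.FirstChild;   // b keeps its own children...
        a.FirstChild = b;           // ...and becomes a's first child
        return a;
    }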

With binary heap, we’d have to allocate a new array that’s the size of both heaps combined, copy all items from both heaps to it, and then call the Heapify method to rebuild the heap. That takes time proportional to the combined size of both heaps.

And, as I mentioned previously, although changing the priority of an item in a binary heap isn’t expensive (it’s an O(log n) operation), finding the node to change is an O(n) operation. With Pairing heap, it’s possible to maintain a node reference, and it’s been shown (see the article linked above) that changing the priority of a node in a Pairing heap is O(log log n).

Another difference between Pairing heap and binary heap is the way they consume memory. A binary heap is typically implemented in a single contiguous array. So if you want a heap with two billion integers in it, you have to allocate a single array that’s 8 gigabytes in size. In a Pairing heap, each node is an individual allocation, and each node contains two pointers (object references). A Pairing heap of n nodes will occupy more total memory than a binary heap of the same size, but the memory doesn’t have to be contiguous. As a result, you might be able to create a larger pairing heap than you can a binary heap. In .NET, for example, you can’t create an array that has more than 2^31 (two billion and change) entries. A Pairing heap’s size, however, is limited only by the available memory.

The Pairing Heap, Part 1

A key takeaway from my early discussion of heaps is that if all we need is the ability to get the smallest item whenever we reach into the bag, then there’s no need to keep the entire bag in sorted order if some less expensive alternate ordering will work. Because of that relaxed requirement, we can implement a priority queue with a heap a whole lot more efficiently than with a sorted list.

I like to call it creative laziness.

Binary heaps are useful and quite efficient for many applications. For a pure priority queue, it’s hard to beat the combination of simplicity and efficiency of a binary heap. It’s easy to understand, easy to implement, and performs quite well in most situations. But the simplicity of the binary heap makes it somewhat inflexible, and also can prove to be inefficient when heaps become very large.

Even though the binary heap isn’t sorted, its ordering rules are pretty strict. If you recall, a valid binary heap must satisfy two properties:

  • The heap property: the key stored in a node is greater than or equal to the key stored in the parent node.
  • The shape property: it’s a complete binary tree.

Maintaining the shape property is required if you want to efficiently implement the binary heap in a list or array, because it lets you implicitly determine the locations of the child or parent nodes. But nothing says that you have to implement a binary heap in an array. You could use a more traditional tree representation, with explicit nodes that have right, left, and parent node references. That has some real benefits.

For example, a common extension to the priority queue is the ability to change the priority of an item in the queue. It’s possible to do that with a binary heap, and in fact isn’t difficult. It’s inefficient, though, because in a binary heap it takes a sequential search to find the item before you can change its priority.

But if you have a traditional tree representation, then you can maintain direct references to the nodes in the heap. At that point, there’s no longer a need for the sequential search to find the node, and changing an item’s priority becomes a pure O(log n) operation. Algorithms such as the A* search algorithm typically use something other than a binary heap for their priority queues, specifically because changing an item’s priority is a common operation.

Another common heap operation is merging (also known as melding): combining two heaps into one. If you have two binary heaps, merging them takes time proportional to the combined size of both heaps. That is, it takes O(n + m) time. The basic idea is to allocate an array that’s the size of both heaps, put all the items into it, and then call a BuildHeap function to construct the heap. With a different heap representation, you can do the merge in constant time.

It turns out that maintaining the shape property when you go to a more traditional tree representation is needlessly expensive. If you eliminate the need for the shape property, it becomes possible to indulge in even more creative laziness.

And, my oh my, have people been creative. There’s a long list of heap implementations: binary, d-ary, binomial, Fibonacci, leftist, skew, pairing, Brodal, and more.

There’s a lot of discussion about which of those is the “fastest” or “best” heap implementation. A lot of it involves theoretically optimum asymptotic behavior, and can get pretty confusing for those of us who just want a working heap data structure that performs well in our applications. Without going into too much detail, I’ll just say that some of those heap implementations that have theoretically optimum performance in some areas (the Brodal queue, for example) are difficult to implement and not at all efficient in real-world situations.

Of those that I’ve worked with, I particularly like Pairing heap because of its simplicity.

The pairing heap’s shape is much different from that of a binary heap. Rather than having right and left child references, a pairing heap node maintains a child list. It maintains the heap property in that any child in a min-heap is greater than or equal to its parent, but the concept of shape is thrown out the window. For example, consider this binary heap of the numbers from 0 to 9.


                    0
              1           3
          5       2   7       4
        9   6   8

Again, in the binary heap, each node has up to two children, and the children are larger than the parents. Although there are multiple valid arrangements of values possible for a binary heap, they all maintain that shape property.

A Pairing heap’s structure is much more flexible. The smallest node is always the root, and children are always larger than their parents, but that’s where the similarity ends. The best way to describe the structure is to go through an exercise of adding and removing items.

If you start with an empty heap and add an item, then that item becomes the root. So, let’s add 6 as the first item in the heap:

    6

Now, we add the value 3. The rules for adding are:

  1. If there is no root, then the new item becomes the root.
  2. If the new item is greater than or equal to the root, then the new item is added to the root item’s child list.
  3. If the new item is smaller than the root, then the new item becomes the root, the old root’s child list is appended to the new root’s child list, and the old root is appended to that child list.

In this case, 3 is smaller than 6, so 3 becomes the root and 6 is added as a child of 3.

    3
    |
    6

Here, the vertical bar character (|) is used to denote a parent-child relationship. So node 6 is a child of node 3.

Now we add 4 to the heap. Since 4 is larger than 3, 4 is added to the child list.

    3
    |
    6,4

When we add 2 to the heap, 2 becomes the new root, and 3 is appended to the child list:

    2
    |
    6,4,3

I’ll add 5, 8, 9, and 7 in that order, giving:

    2
    |
    6,4,3,5,8,9,7

Adding 0 makes 0 the root, and then adding 1 appends to the child list:

    0
    |
    6,4,3,5,8,9,7,2,1

Note that the heap property is maintained. The smallest item is at the root, and child items and their siblings are larger than their parents. Because the smallest item is at the root, we can still execute the get-minimum function in constant time.

And in fact, insert is also a constant time operation. Inserting an item consists of a single comparison, and appending an item to a list.
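Expressed in code, the insert rules might look something like this sketch. I’m using the conceptual model here, with an explicit child list; the class and method names are mine:

    class HeapNode
    {
        public int Data;
        public List<HeapNode> Children = new List<HeapNode>();
    }

    static HeapNode Insert(HeapNode root, int value)
    {
        var node = new HeapNode { Data = value };
        if (root == null) return node;          // rule 1: new item becomes the root
        if (value >= root.Data)
        {
            root.Children.Add(node);            // rule 2: append to the root's child list
            return root;
        }
        node.Children.AddRange(root.Children);  // rule 3: new root takes the old root's
        root.Children.Clear();                  // children...
        node.Children.Add(root);                // ...and then the old root itself
        return node;
    }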

Things get interesting when we want to remove the smallest item. Removing the smallest item is no trouble at all, of course. It’s figuring out the new root that gets interesting. This proceeds in two steps:

First, process the children in pairs from left to right, creating 2-node heaps with the lowest item at the root, and the other item as a child. So using the child list above, we would create:

    4 3 8 2 1
    | | | |
    6 5 9 7

So 6 is a child of 4, 5 is a child of 3, etc. In this case, 1 doesn’t have a child.

Then, we process from right to left, taking the lower item as the root and adding the other item as a child of the root. Starting with 1 and 2, we combine them to create:

    4 3 8 1
    | | | |
    6 5 9 2
          |
          7

Proceeding to compare 1 and 8:

    4 3 1
    | | |
    6 5 2, 8
        |  |
        7  9

Note here that 2 and 8 are siblings, but 7 is a child of 2, and 9 is a child of 8. 9 and 7 are not siblings.

When we’re done processing, we have:

    1
    |
    2, 8, 3, 4
    |  |  |  |
    7  9  5  6

Again, the heap property is maintained.

It’s very important to understand that the heap property is maintained. All child nodes are larger than their parent nodes. Siblings (children of the same node) are represented in an unordered list, which is just fine as long as each sibling is larger than the parent node. Also, a node’s children only have to be larger than their parent node. Being smaller than a parent’s sibling does not violate the heap property. In the figure above, for example, 5 is larger than its parent, 3, but smaller than its parent’s sibling, 8.

Removing 1 from the heap leaves us with

    2, 8, 3, 4
    |  |  |  |
    7  9  5  6

Again, we pair from left to right. 2 is smaller than 8, so 2 becomes the root of the subtree. But what happens to the children? Each node keeps its own children; the larger node is simply added to the smaller node’s child list. So pairing 2 and 8 results in:

    2
    |
    8, 7
    |
    9

And pairing the others gives:

    2      3
    |      |
    8, 7   4, 5
    |      |
    9      6

And combining from right to left makes 2 the root of the new tree:

    2
    |
    8, 7, 3
    |     |
    9     4, 5
          |
          6

Again, you can see that the heap property is maintained.

Now, when we remove 2 from the heap, we first pair 8 and 7 to create:

    7     3
    |     |
    8     4, 5
    |     |
    9     6

And then combine to form:

    3
    |
    4, 5, 7
    |     |
    6     8
          |
          9

When 3 was selected as the smallest item, node 7 was appended to the child list.
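Here’s a sketch of that two-pass combine, using the same conceptual child-list model as the insert sketch above:

    // Pair two subtrees: the larger root joins the smaller root's
    // child list, keeping its own children.
    static HeapNode Pair(HeapNode a, HeapNode b)
    {
        if (b == null) return a;
        if (b.Data < a.Data) { var t = a; a = b; b = t; }
        a.Children.Add(b);
        return a;
    }

    static HeapNode DeleteMin(HeapNode root)
    {
        if (root.Children.Count == 0) return null;
        // First pass: combine the children in pairs, left to right.
        var merged = new List<HeapNode>();
        for (int i = 0; i < root.Children.Count; i += 2)
        {
            var second = (i + 1 < root.Children.Count) ? root.Children[i + 1] : null;
            merged.Add(Pair(root.Children[i], second));
        }
        // Second pass: combine right to left to form the new root.
        var result = merged[merged.Count - 1];
        for (int i = merged.Count - 2; i >= 0; i--)
            result = Pair(merged[i], result);
        return result;
    }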

That’s the basics of the Pairing heap. You might be asking yourself how that can possibly be faster than a binary heap. That will be the topic for next time.

Bad Boolean interfaces

Today I got a bug report saying that an error occurred while trying to save a record. That there was an error in this code I inherited isn’t unusual. It was odd, though, to encounter an error in saving a record. That particular bit of code executes hundreds of times per day and I’d never seen a bug report. A few minutes looking at the logs led me to this snippet:

    var outlets = outletRepository.GetOutlets();
    var outletName = outlets.First(outlet => outlet.Id == myRecord.OutletId).Name;

For those of you who don’t speak C#, all that’s doing is getting the list of Outlet records from the database and locating the one that’s referenced by myRecord. We do it this way rather than just looking up the single outlet because we use the returned list of outlets later in the method.

At first I couldn’t figure out why that failed. After all, our database has referential integrity, so myRecord can’t possibly (as far as I know) reference an outlet that doesn’t exist.

It turns out that GetOutlets is defined as:

    public List<Outlet> GetOutlets(bool activeOnly = true)

So a call to GetOutlets() by default returns just the active outlets. Because the outlet referenced by this particular record was marked as inactive, that outlet was not returned by GetOutlets, and as a result the call to Enumerable.First threw an exception.

The most glaring error here is the use of First, which I contend should be used only if you know that what you’re looking for is somewhere in the list. Otherwise you probably should call FirstOrDefault and then check for a null return value.
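Something like this sketch fails with a meaningful error instead of a mystifying one:

    var outlet = outlets.FirstOrDefault(o => o.Id == myRecord.OutletId);
    if (outlet == null)
    {
        // Handle the missing outlet explicitly: log it, throw a
        // descriptive exception, whatever makes sense for the app.
        throw new InvalidOperationException(
            $"Outlet {myRecord.OutletId} was not returned by GetOutlets().");
    }
    var outletName = outlet.Name;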

A more insidious error is the design of the GetOutlets method. In the first place, using a Boolean value to determine whether the method returns all outlets or just the active outlets is a bad idea. true or false are rather ambiguous here. You can’t tell by looking at the value passed what attribute is being tested, or if the flag is used for something else entirely. Maybe true means “sort the records.” Programmers are somehow supposed to “know” what it means. In any case, if the parameter is required the programmer knows that he has to pass true or false in order for it to work. And one would assume that an unfamiliar programmer would check the parameter name and pass the appropriate value. But when you give the parameter a default value, you’ve effectively removed any roadblock to using the method incorrectly. Certainly the statement outlets = outletRepository.GetOutlets(); doesn’t convey any information about the parameter–or that it even exists.

Don’t use Boolean values like this. If you want two forms of the GetOutlets method, then write them.

    public List<Outlet> GetAllOutlets();
    public List<Outlet> GetOnlyActiveOutlets();

Another option, less preferred but possibly acceptable, would be to define an enumeration that explicitly states what you’re looking for:

    enum OutletFilterType
    {
        All,
        OnlyActive
    }

    public List<Outlet> GetOutlets(OutletFilterType filter);

And to call it:

    outlets = GetOutlets(OutletFilterType.All);
    outlets = GetOutlets(OutletFilterType.OnlyActive);

But do not give that parameter a default value.

If you really must take shortcuts in your private implementation–a practice I strongly discourage, by the way, because private code has a distressing tendency to become public–then fine. Make a mess in your own house if you wish, but keep your public interfaces clean and unambiguous.

Every time I think I’m getting a handle on this code, I uncover another little code bomb. The code is littered with this GetXXX(true/false) pattern, and the use of First to get things from untrusted lists. I suppose you could call it job security, but I’m getting tired of it.

Delete the UPSERT

Many programmers who write code to work with databases like the concept of an “upsert” command: passing a record to the database and letting it figure out whether the record should be added or updated. If the record already exists in the database then it’s updated. Otherwise a new record is created. It’s very convenient.

It’s also a terrible idea. At least, that’s my experience. Many years ago I argued strongly for an “upsert” command, and on one project I created my data access layer with such a thing. We weren’t using stored procedures on that project, so the “check and update if there” logic was written in code. It worked. It also caused no end of problems, and I swore that I’d never use the upchuck (err … upsert) model again. And I didn’t, until I inherited this project.

The project I’m working on uses the upsert model throughout. The undocumented but well-understood convention is that none of our database tables have a record id of 0. So if I pass a record with id 0 to an upsert stored procedure, the record won’t be found in the table and a new record is inserted. This is convenient for the guy who writes the database access code because he only has to write one method to handle both insert and update. When you’re writing CRUD code for tables that have dozens of fields, that seems like a real savings: you write a whole lot less code.

But it comes at a cost that many don’t recognize: clients who call the upsert method have to know that record id 0 has a special meaning. That’s fine for the original development team, but it’s not at all obvious to a programmer who comes on to the project some time after the program is in production.

It gets worse. In our project we have a Categories table that contains category names and other information. Each Reward that we create belongs to exactly one category, the identifier of which is stored in the Rewards record. There are UI screens that allow system administrators to add new categories. The project is pretty mature now, so we don’t often add new categories.

Unfortunately, whoever wrote those UI screens wasn’t too hip on things. To create the drop-down list of categories, one would typically get all of the categories from the database and create a combo box (drop down) that lets the user select a category from the list. Sometimes the dropdown will have an initial value something like “Select a category,” with a default value. Usually, that label is added in code after getting the list of categories from the database. But not in this case. No, the genius who designed these screens decided that he’d create a category with an id of 0, and text “Select a category.” And he decided that his “no record” id would be -1.

So for Categories and a few other configuration tables, you pass -1 for the record id if you want to insert a new record. Passing 0 just overwrites the existing “Select a category” record.

So now the rule is “Passing a record id of 0 inserts a new record. Except for tables Foo, Bar, and Idiot, for which you must pass a record id of -1.” (Actually, any negative number will do, but I digress.)

Yeah. That’s like totally obvious. Not!

So when we did some refactoring in that area, one of my guys, relatively new to the project, made the reasonable assumption that 0 meant “new record” for this table just like it does for every other table. Imagine our surprise when QA told us that adding a new category just overwrote the record with ID 0.

I’ll grant that the programmer who introduced this bug is at fault for not noticing the difference. I put more blame on the programmer who thought it’d be a good idea to special-case these few tables, making the mistake not only possible but almost guaranteed to happen. But ultimately the blame goes to whoever decided that the Upsert model was a good idea. If we had separate Insert and Update methods and stored procedures, then there would be no question what was happening. With a separate Insert method, you’d never pass a record id, which would make it impossible to inadvertently do an Update when you meant to do an Insert.

One argument programmers make for the use of Upsert is that, with it, the create new record and edit existing record functions can be identical: initialize the record (either by reading an existing one from the database, or creating a new one with default values), and call the Edit method. When editing is done, just call Upsert and things “just work.” I’ve written code like that. I’ve also been bitten by it because that common Edit method always ends up needing a flag to indicate whether you’re creating new or editing existing, and the method is peppered with if (IsEdit) conditionals. In the end, the “common” Edit method is hugely complex and very fragile. It’s been my experience that the temptation to combine create and edit functionality is very strong, but giving in to it is the first step on a long and painful road.

Upsert is evil. It might be useful on the database end, but it has no use in a data access layer or anywhere else a mere mortal programmer might be. The programmer should know whether his code is editing an existing record or creating a new one, and his code should call the Update or Insert method, as appropriate.
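In other words, something like this sketch (the names are mine):

    // Separate operations make the intent explicit. No magic record ids.
    public int InsertReward(Reward reward);     // returns the new record's id
    public void UpdateReward(Reward reward);    // requires an existing record id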


Colorado flag specifications

Yes, I know. It’s only been … six months or so since I posted here. I had some trouble with my ISP, and I got busy with other things. Perhaps I’ll write about some of that.

I found a buyer for my Texas flag coffee table. At least, he said he was coming over with a check. I won’t say it’s sold until I get the cash, but I’m pretty confident that I’ve made the sale.

[Photo: the Texas flag coffee table]

The buyer wanted me to add some bracing to the legs, which is why I added the lower supports. I think it’d be pretty hard to break the joint at the top of the legs, but this table will likely be subjected to a lot of abuse aboard a party boat. I was happy to make the modification if it meant a sale.

With that one gone, I thought I’d make another table.  This one, though, will have a Colorado state flag.

[Image: the Colorado state flag]

Why Colorado? Two reasons:

  1. I grew up there, and love the place.
  2. It’s another easy flag, like the Texas flag and the American flag.

I need a scalable vector graphic (.svg) file for the flag, and whereas the ones I’ve downloaded look okay to the eye, every one I’ve examined seemed slightly imperfect. I need that “C” to be centered perfectly, or my flag is going to look kind of wonky. At least, that’s what I thought.

So I went looking for the flag’s specifications. Here’s the language from the Colorado Revised Statutes, Title 24, Article 80, Part 9.

A state flag is hereby adopted to be used on all occasions when the state is officially and publicly represented, with the privilege of use by all citizens upon such occasions as they may deem fitting and appropriate. The flag shall consist of three alternate stripes to be of equal width and at right angles to the staff, the two outer stripes to be blue of the same color as in the blue field of the national flag and the middle stripe to be white, the proportion of the flag being a width of two-thirds of its length. At a distance from the staff end of the flag of one-fifth of the total length of the flag there shall be a circular red C, of the same color as the red in the national flag of the United States. The diameter of the letter shall be two-thirds of the width of the flag. The inner line of the opening of the letter C shall be three-fourths of the width of its body or bar, and the outer line of the opening shall be double the length of the inner line thereof. Completely filling the open space inside the letter C shall be a golden disk; attached to the flag shall be a cord of gold and silver intertwined, with tassels one of gold and one of silver. All penalties provided by the laws of this state for the misuse of the national flag shall be applicable to the said state flag.

Yeah, that’s a mouthful. Let’s take a closer look at the dimensions:

  1. “The flag shall consist of three alternate stripes to be of equal width and at right angles to the staff,”
    Easy enough.
  2. “…the proportion of the flag being a width of two-thirds of its length.”
    So if the flag is 1 unit long, then the flag is 2/3 unit wide. That makes each stripe 1/3 of 2/3 units, or 2/9 units, which is … my head explodes. Let’s say that each stripe is one unit wide. That makes the flag three units wide and 4.5 units long. That should be easier to deal with.
  3. “At a distance from the staff end of the flag of one-fifth of the total length of the flag there shall be a circular red C…”
    So the left edge of the ‘C’ is 4.5/5, or 0.9 units from the left edge.
  4. “The diameter of the letter shall be two-thirds of the width of the flag.”
    Well that’s easy enough. The flag is three units wide, so the circle is two units in diameter.
  5. “The inner line of the opening of the letter C shall be three-fourths of the width of its body or bar,”
    Now this is a problem. It says that the line is 3/4 the width of the letter’s body, but it doesn’t say how wide the body should be. And is that “inner line” the arc of the inner line? Or is it the length of the chord connecting the endpoints of that arc?
  6. “…and the outer line of the opening shall be double the length of the inner line thereof.”
    Not a problem, once I know the answer to #5.
  7. “Completely filling the open space inside the letter C shall be a golden disk;”
    And just how large should that disk be? We can infer it from #5, but that requirement is insufficiently specified.

It could be that #5 and #6 are sufficient to define the width of the letter and, as a result, the diameter of the inner golden disk. The Wikipedia article says, “On March 31, 1964, the legislature further dictated the diameter of the gold disc to be equal to the center stripe.” I don’t see that in the statute, so perhaps there is only one solution that satisfies the conditions. I’ll have to study the geometry.

Note also that nothing in the language says anything about the vertical placement of the ‘C’. As far as I can tell, that letter could be placed anywhere along that vertical line, 1/5 of the length from the left edge of the flag.

The Colorado flag with three stripes and the red ‘C’ with gold interior was first described by legislation in 1911. It was revised in 1929 to say that the red and blue must be the same colors as the U.S. flag, but the “golden” color doesn’t appear to be specified anywhere. The size of the ‘C’ wasn’t specified until 1964. I find it curious and somewhat amusing that the statute doesn’t define the vertical position, and that the rest of the description is so wonky. You’d think they would have hired a consultant to clarify the language so that anybody with a ruler and compass could easily draw the flag.

Good thing I paid attention in high school geometry, huh?

Drawing attention to or hiding errors

I started my programming career writing COBOL programs for banks. One of my early tasks had me writing a program that would send a letter to all of our loan customers, informing them of a change in the payment notices. Included in that letter was to be a sample of the new payment coupon, which was to be clearly marked as a sample so that nobody would think it was bill and send a payment.

My design for the sample coupon included the words NOT AN INVOICE and NO PAYMENT REQUIRED and DO NOT PAY FROM THIS SAMPLE printed in several places. In addition, the account number printed on the coupon was something innocuous like “123-456-789”, and the customer’s name and address on all the coupons were the same:

Stanley Steamroller
123 Main St.
Thule, Greenland

And the amount due was “$123.45”.

My boss had me take that to the branch manager for approval. The manager praised my good thinking for including the NOT AN INVOICE and other wording, and the obviously fake name and address. His comment: “I was worried that customers might complain about an extra payment notice, but what you have here is clearly a sample. Nobody will be confused by this.”

To my knowledge, nobody called to complain that they had already made their payment and that they didn’t appreciate this erroneous invoice. We did, however, receive several checks for $123.45, with the account number 123-456-789 written in the Memo field, nicely packaged up with the sample payment coupon. It’s fortunate that the checks had the senders’ addresses on them. Otherwise we would not have known who to contact.

The first lesson I learned from this experience is that some people see only what they expect to see (“Oh, look, a loan payment notice from the bank. Guess I’ll pay it.”). Later, with a similar mailing some months afterward, I learned that if you want people to stop and think about what they’re looking at, make a glaring error. If I had made that amount $7,743,926.59, I suspect nobody would have sent a check. We might have had a few calls from irate customers saying that they couldn’t possibly owe seven million dollars on their $15,000 loan, but it’s likely that they’d examine the notice a little more carefully before picking up the phone.

If you want people to notice something in a document, make an error that’s impossible to miss. That’ll force them to look more carefully at the rest of the page.

Oddly enough, the converse of that is also true in some situations. When preparing for room inspections at military school, I’d purposely leave something out of order. I wouldn’t make it too obvious, but it’d be something that the upperclassmen always looked for, and that was commonly in error. I found that if I tried to make my room perfect, those guys would spend entirely too much time looking for something wrong. But if I made one or two easy-to-find errors, they’d find the discrepancy, mark it down, and then leave the room happy that they did their jobs.

I think the difference is expectation. When somebody sends you information that you assume to be correct (like a statement from your bank), a glaring error makes you examine the rest of the information more carefully. But an upperclassman who’s looking for an opportunity to gig a subordinate will stop as soon as he’s found an error or two. He has proven his superiority and he has other rooms to inspect.

I’ve heard that the tactic works for tax auditors, too: give him an obvious reason to make you pay a little extra tax, and he’ll give the rest of your records a cursory glance before declaring everything in order. He’s proven his worth, so he can pack up his calculator and head off to torture his next victim.

Leadership and development teams

When he can, a leader will explain to his subordinates the reasons for his orders. Not because he has to, but because he knows that people usually do a better job when they know why they’re doing it. In addition, the leader’s subordinates are then willing to follow orders that the leader can’t explain (usually due to time pressure or enforced secrecy) because they trust that he has good reasons.

The leader earns obedience through mutual trust and respect.

Leadership is about building and directing a team: a group of people who trust and respect each other, and work together towards a common goal. It’s also about giving team members the information and authority they need to get the job done, and then letting them do it. The person who insists on being involved in the minutiae of every team member’s job is no leader at all, but rather a meddler who kills morale and destroys the team’s performance.

The best leaders are those who appear to do nothing at all, but behind the scenes they’re getting the team members the information and access they need, and are clearing obstacles from the team’s path before the rest of the team members even know those obstacles exist.

A well-led team can function without the leader for a surprisingly long time. If the leader has done his job, then his team’s way has been cleared of any looming obstacles, they have everything they need to complete their current tasks, and a good idea of what they’ll be doing next.

In short: the leader doesn’t do the work; he makes it possible for his team to do the work.

The above is as true for a programming team lead as it is for the leader of a military unit or a senior manager of a Fortune 500 company. The team’s jobs might be completely different, but the team leader’s job is essentially the same: create an environment that makes it possible for the team to get the job done, and then step back. Observe. Give encouragement and direction when necessary. But let the team members do the things they’ve been hired to do.

It’s been my experience that most software development teams do not have good leaders. It shows in missed release dates, high bug counts, frequent “hotfix” releases, team member dissatisfaction and flouting of imposed standards, and high levels of past-due technical debt. The result is bloated, crufty, unstable code characterized by tight coupling, low cohesion, quick fixes, special cases, and convoluted dependencies. All to the detriment of the product.

All too often, the “leader” of the team has upper management convinced that the type of software his team is building is somehow different, and those problems are unavoidable in the company’s particular case. He probably believes it himself. As long as management accepts that, they will continue to experience missed delivery dates and their users will continue to be unhappy.

It’s all about context

The C# using directive and implicitly typed local variables (i.e. using var) are Good Things whose use should be encouraged in C# programs, not prohibited or severely limited. Used correctly (and it’s nearly impossible to use them incorrectly), they reduce noise and improve understanding, leading to better, more maintainable code. Limiting or prohibiting their use causes clutter, wastes programmers’ time, and leads to programmer dissatisfaction.

I’m actually surprised that I find it necessary to write the above as though it’s some new revelation, when in fact the vast majority of the C# development community agrees with it. Alas, there is at least one shop–the only one I’ve ever encountered, and I’ve worked with a lot of C# development houses–that severely limits the use of those two essential language features.

Consider this small C# program that generates a randomized list of numbers from 0 through 99, and then does something with those numbers:

    namespace MyProgram
    {
        public class Program
        {
            static public void Main()
            {
                System.Collections.Generic.List<int> myList = new System.Collections.Generic.List<int>();
                System.Random rnd = new System.Random();
                for (int i = 0; i < 100; ++i)
                {
                    myList.Add(rnd.Next(100));
                }
                System.Collections.Generic.List<int> newList = RandomizeList(myList, rnd);

                // now do something with the randomized list
            }
            
            static private System.Collections.Generic.List<int> RandomizeList(
                System.Collections.Generic.List<int> theList,
                System.Random rnd)
            {
                System.Collections.Generic.List<int> newList = new System.Collections.Generic.List<int>(theList);
                for (int i = theList.Count-1; i > 0; --i)
                {
                    int r = rnd.Next(i+1);
                    int temp = newList[r];
                    newList[r] = newList[i];
                    newList[i] = temp;
                }
                return newList;
            }
        }
    }

I know that’s not the most efficient code, but runtime efficiency is not really the point here. Bear with me.

Now imagine if I were telling you a story about an experience I shared with my friends Joe Green and Bob Smith:

Yesterday, I went to Jose’s Mexican Restaurant with my friends Joe Green and Bob Smith. After the hostess seated us, Joe Green ordered a Mexican Martini, Bob Smith ordered a Margarita, and I ordered a Negra Modelo. For dinner, Joe Green had the enchilada special, Bob Smith had Jose’s Taco Platter . . .

Wouldn’t you get annoyed if, throughout the story, I kept referring to my friends by their full names? How about if I referred to the restaurant multiple times as “Jose’s Mexican Restaurant” rather than shortening it to “Jose’s” after the first mention?

The first sentence establishes context: who I was with and where we were. If I then referred to my friends as “Joe” and “Bob,” there would be no ambiguity. If I were to write 50 pages about our experience at the restaurant, nobody would get confused if I never mentioned my friends’ last names after the first sentence. There could be ambiguity if my friends were Joe Smith and Joe Green, but even then I could finesse it so that I didn’t always have to supply their full names.

Establishing context is a requirement for effective communication. But once established, there is no need to re-establish it if nothing changes. If there’s only one Joe, then I don’t have to keep telling you which Joe I’m referring to. Doing so interferes with communication because it introduces noise into the system, reducing the signal-to-noise ratio.

If you’re familiar with C#, most likely the first thing that jumps out at you in the code sample above is the excessive use of full namespace qualification: all those repetitions of System.Collections.Generic.List that clutter the code and make it more difficult to read and understand.

Fortunately, the C# using directive allows us to establish context, thereby reducing the amount of noise in the signal:

    using System;
    using System.Collections.Generic;
    namespace MyProgram
    {
        public class Program
        {
            static public void Main()
            {
                List<int> myList = new List<int>();
                Random rnd = new Random();
                for (int i = 0; i < 100; ++i)
                {
                    myList.Add(rnd.Next(100));
                }
                List<int> newList = RandomizeList(myList, rnd);

                // now do something with the randomized list
            }
            
            static private List<int> RandomizeList(
                List<int> theList,
                Random rnd)
            {
                List<int> newList = new List<int>(theList);
                for (int i = theList.Count-1; i > 0; --i)
                {
                    int r = rnd.Next(i+1);
                    int temp = newList[r];
                    newList[r] = newList[i];
                    newList[i] = temp;
                }
                return newList;
            }
        }
    }

That’s a whole lot easier to read because you don’t have to parse the full type name to find the part that’s important. Although it might not make much of a difference in this small program, it makes a huge difference when you’re looking at code that uses a lot of objects, all of whose type names begin with MyCompany.MyApplication.MySubsystem.MyArea.

The use of using is ubiquitous in C# code. Every significant Microsoft example uses it. Every bit of open source code I’ve ever seen uses it. Every C# project I’ve ever worked on uses it. I wouldn’t think of not using it. Nearly every C# programmer I’ve ever met, traded email with, or seen post on StackOverflow and other programming forums uses it. Even the very few who don’t generally use it bend the rules, usually when LINQ is involved, and for extension methods in general.

I find it curious that extension methods are excepted from this rule. I’ve seen extension methods cause a whole lot more programmer confusion than the using directive ever has. Eliminating most in-house created extension methods would actually improve the code I’ve seen in most C# development shops.
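
To illustrate: an extension method is indistinguishable from an instance method at the call site, which is exactly where the confusion starts. Here’s a minimal sketch (the IsBlank helper is hypothetical, but typical of in-house extensions):

    using System;

    public static class StringExtensions
    {
        // A hypothetical in-house helper. Nothing at the call site
        // below tells the reader this isn't a built-in string method.
        public static bool IsBlank(this string s)
        {
            return string.IsNullOrWhiteSpace(s);
        }
    }

    public class Demo
    {
        public static void Main()
        {
            string name = "   ";
            // Reads like a framework call; it's actually defined above,
            // pulled in silently by a using directive.
            Console.WriteLine(name.IsBlank());   // True
        }
    }

A reader who doesn’t know the extension exists gets no visual cue to go looking for it.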

The arguments against employing the using directive mostly boil down to, “but I can’t look at a code snippet and know immediately what the fully qualified name is.” And I agree: somebody who is unfamiliar with the C# base class library or with the libraries being used in the current project will not have the context. But it’s pretty unlikely that a company will hire an inexperienced C# programmer for a mid-level job, and anybody the company hires will be unfamiliar with the project’s class layout. In either case, a new guy might find the full qualification useful for a week or two. After that, he’s going to be annoyed by all the extra typing, and having to wade through noise in order to find what he’s looking for.

As with having friends named Joe Green and Joe Smith, there is potential for ambiguity if you have identically named classes in separate namespaces. For example, you might have an Employee class in your business layer and an Employee class in your persistence layer. But if your code is written rationally, there will be very few source files in which you refer to both. And in those, you can either revert to fully-qualified type names, or you can use namespace aliases:

    using BusinessEmployee = MyCompany.MyProject.BusinessLogic.Employee;
    using PersistenceEmployee = MyCompany.MyProject.Persistence.Employee;

Neither is perfect, but either one is preferable to mandating full type name specification in the entire project.

The code example still has unnecessary redundancy in it that impedes understanding. Consider this declaration:

    List<int> myList = new List<int>();

That’s like saying, “This variable called ‘myList’ is a list of integers, and I’m going to initialize it as an empty list of integers.” You can say the same thing more succinctly:

    var myList = new List<int>();

That eliminates the redundancy without reducing understanding.

There is some minor debate in the community about using var (i.e. implicit typing) in other situations, such as:

    var newList = RandomizeList(myList, rnd);

Here, it’s not plainly obvious what newList is, because we don’t know what RandomizeList returns. The compiler knows, of course, so the code isn’t ambiguous, but we mere humans can’t see it immediately. But if you’re using Visual Studio or any other modern IDE, you can hover your mouse over the call to RandomizeList, and a tooltip will appear, showing you the return type. And if you’re not using a modern IDE to write your C# code, you have a whole host of other problems that are way more pressing than whether or not a quick glance at the code will reveal a function’s return type.

I’ve heard people say, “I use var whenever the type is obvious, or when it’s necessary.” The “necessary” part is a smokescreen. What they really mean is, “whenever I call a LINQ method.” That is:

    var oddNumbers = myList.Where(i => (i % 2) != 0);

The truth is, they could easily have written:

    IEnumerable<int> oddNumbers = . . .

The only time var is absolutely necessary is when you’re working with anonymous types. For example, this code that creates a type that contains the index and the value:

    var indexedValues = myList.Select((val, i) => new {Index = i, Value = val});

I used to be in the “only with LINQ” camp, by the way. But after a while I realized that most of the time the return type was perfectly obvious from how the result was used, and in the few cases where it wasn’t, a quick mouse hover revealed it. I’m now firmly in the “use implicit typing wherever it’s allowed” camp, and my coding has improved as a result. With the occasional exception of code snippets taken out of context, I’ve never encountered a case in which using type inference made code harder to understand.

If you find yourself working in one of the hopefully very few shops that restricts the use of these features, you should actively encourage debate and attempt to change the policy.

If you’re one of the hopefully very few people attempting to enforce such a policy (and I say “attempting” because I’ve yet to see such a policy successfully enforced), you should re-examine your reasoning. I think you’ll find that the improvements in code quality and programmer effectiveness that result from the use of these features far outweigh the rare minor inconveniences you encounter.

How to waste a developer’s time

A company I won’t name recently instituted a policy that has their two senior developers spending a few hours per week–a half hour to an hour every day–examining code pushed to the source repository by other programmers. At first I thought it was a good idea, if a bit excessive. Then I learned what they’re concentrating on.

Would you believe they’re concentrating on things like indentation, naming conventions, and other such mostly lexical constructs? That’s right, they’re paying senior developers to spend something like 10% of their time doing a job that can be done more accurately, more completely, and much faster by an inexpensive tool that is triggered with every check-in.

And they wonder why they can’t meet ship dates.

Style issues are important, but you don’t pay your best developers to play StyleCop. If they’re going to review others’ code, they should be concentrating on system architecture issues, correctness, testability, maintainability, critical performance issues, and a whole host of high-level issues that it takes actual human intelligence to evaluate.
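
To make the contrast concrete, consider a contrived snippet like this one (hypothetical, but representative of what I see):

    using System;
    using System.IO;

    public class Auditor
    {
        // A style tool will flag the lowercase method name and the brace
        // placement in milliseconds, on every check-in.
        public void process(string path) {
            try {
                var text = File.ReadAllText(path);
                Console.WriteLine(text.Length);
            }
            catch (Exception) {
                // Only a human reviewer is likely to flag the real bug:
                // the exception is silently swallowed.
            }
        }
    }

The tool catches the surface noise; the human should be looking for the swallowed exception.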

Having your best developers spend time acting like mindless automatons is not only a huge waste of money and talent, it’s almost guaranteed to cost you a senior developer or two. I hope, anyway. Forcing any developer to play static analyzer on a regular basis is one of the fastest ways to crush his spirit. And any developer who’s worth the money you’re paying him will be working for your competitor in no time flat rather than waste his time doing monkey work.

The drawbacks of having your senior developers spend their time examining code for style issues:

  • They can’t do as good a job as StyleCop and other tools.
  • They can’t do it as fast as StyleCop and other tools.
  • They can’t do it on every check-in.
  • There are much more important things for them to spend their time on.
  • They hate doing it.
  • They will find somewhere else to work. Some place that values their knowledge and experience, and gives them interesting problems to solve.

Benefits of having your senior developers spend their time examining code for style issues:

  • ??

Jim’s building furniture?

I might have mentioned a while back that Debra bought me a one-year membership to TechShop for Christmas last year. She also gave me a gift certificate for five classes. The way TechShop works, a member has to take a class covering Safety and Basic Use before using most of the equipment. The five classes I’ve taken so far are Wood Shop SBU (covers table saw, bandsaw, sander, drill press); Jointer, Planer, and Router; Wood Lathe; Laser Cutter; and Sandblasting and Powder Coating.

Since January I’ve spent lots of time at TechShop, mostly cutting up logs for carving wood. I have a bandsaw at home, but it can’t cut material thicker than six inches, and its throat depth is something less than twelve inches. The bandsaw at TechShop can cut a twelve-inch-thick log, and has a much deeper throat. I can cut bigger stuff on their bandsaw.

In late August, I saw that my neighbor who owns a handyman business had a trailer full of wood. It turns out that he had removed an old cedar fence and deck for a client, and was going to haul the wood off to the dump. I spent six hours pulling nails, and drove home with a truckload of reclaimed lumber. Then I got busy making things.

truckload

I had seen an American flag made from old fencing, and wanted to make one. Given that I now had plenty of old fence to play with, I cut a bunch of one-inch strips, painted them up, and glued them back together. The result is this flag, which is 13 inches tall and 25 inches wide, close to the official width-to-height ratio of 1.9.

flag1

I’ve since made two others that size, and one smaller. Then I stepped up to 1.5 inch strips to make a flag that’s 19.5″ x 37″, and got the crazy idea that it’d look better in a coffee table than it would hanging on the wall. So, blissfully ignorant of what I was getting into, I set about building a coffee table.

The only thing I’d built from scratch before was that oval table, and it got covered with a tablecloth. Besides, all I did was slightly modify a work table plan that I found online. I designed this table myself, and set the goal of making it from 100% reclaimed wood. I’ll save the step-by-step instructions for another time. Suffice it to say that it took me an absurdly long time and a lot of mistakes to finish constructing it. But the result is, I think, very nice.

flagTable_t

The table is 17.5 inches tall, and measures approximately 42″ x 26″. The only wood on it that’s not reclaimed is the eight dowels used to hold the apron together.

Mind you, the table is constructed, but it’s not finished. The flag is recessed about 1/8 inch below the border. I’m going to fill that space with epoxy resin. Most likely, the resin will also flow over the border to protect it. I’ll use an oil finish, probably something like tung oil.

I couldn’t build just one table, though. So shortly after I finished that one I made a Texas state flag. Last night I completed construction of my Texas table, which measures about 42″ x 30″, and is also 17.5 inches tall. It’s wider than the other table due to the Texas flag’s width-to-height ratio of 3 to 2. So a 36 inch wide flag is 24 inches tall.

texasTable_t

This table, too, has the flag recessed, and the space will be filled with epoxy resin.

The construction of the Texas table is a little different than the first one. Most importantly to me, I constructed the apron with finger joints rather than using dowels at the corners. That allows me to say that the table is 100% reclaimed wood. Plus, the finger joints are easier. Dowels are a pain in the neck.

I’m going to make at least one more of each of these tables, probably using the Texas table design. But before I do that I need to work on a project for Debra. That one’s going to be especially fun because I’ll get to play with the ShopBot. Once I take the class, that is.

 

Cleaning up some code

One thing about inheriting a large code base that’s been worked on by many people over a long period of time is that it’s usually full of learning opportunities. Or perhaps teaching opportunities. Whatever the case, the other day I ran into a real whopper.

Imagine you have a list of names and associated identification numbers. Items in the list are instances of this class:

    public class Item
    {
        public string Name {get; set;}
        public int Id {get; set;}
    }

We now need a method that, given a list of items and a name, will search the list and return the identifier. Or, if the name doesn’t exist in the list, will return the identifier of the default item, named “Default”.

Ignore for the moment that expecting there to be a default item in the list is almost certainly a bad design decision. Assume that the program you’re working with has that restriction and there’s nothing you can do about it.

I’ve changed the names and simplified things a little bit by removing details of the class that aren’t relevant to the example, but that’s essentially the problem that the code I ran into had to solve. Here’s how the programmer approached that problem.

    public int GetIdFromName(List<Item> items, string name)
    {
        int returnValue;

        if (items.Where(x => x.Name.ToLower() == name.ToLower()).Count() > 0)
        {
            returnValue = items.Where(x => x.Name.ToLower() == name.ToLower()).FirstOrDefault().Id;
        }
        else
        {
            returnValue = items.Where(x => x.Name.ToLower() == "Default".ToLower()).FirstOrDefault().Id;
        }
        return returnValue;
    }

That code, although it fulfills the requirements, is flawed in many ways. First of all, it’s entirely too complicated and verbose for what it does. Remember, the high level task can be expressed by this pseudocode:

    if (list contains an item with the specified name)
        return that item's Id field
    else
        return the Id field of the item with name "Default"

Let’s look at that piece by piece.

Consider the first if statement:

    if (items.Where(x => x.Name.ToLower() == name.ToLower()).Count() > 0)

This code enumerates the entire list, counting the number of items whose Name property matches the supplied name parameter. But all we really want to know is if there is at least one matching member in the list. There’s no need to count them all. So rather than checking for Count() > 0, we can call Any, which will stop enumerating the first time it finds a match.

    if (items.Where(x => x.Name.ToLower() == name.ToLower()).Any())

And in fact there’s an Any overload that takes a predicate. So we can get rid of the Where, too:

    if (items.Any(x => x.Name.ToLower() == name.ToLower()))

So, if there is an item, then the code scans the list again to get the matching item. That line, too, has an extraneous Where that we can replace with a FirstOrDefault(predicate). So the first lines become:

    if (items.Any(x => x.Name.ToLower() == name.ToLower()))
    {
        returnValue = items.FirstOrDefault(x => x.Name.ToLower() == name.ToLower()).Id;
    }

That simplifies things a bit, but it seems silly to go through the list one time looking to see if something is there, and then go through it again to actually pick out the item. Much better to do a single scan of the list:

    int returnValue;
    var item = items.FirstOrDefault(x => x.Name.ToLower() == name.ToLower());
    if (item != null)
    {
        returnValue = item.Id;
    }

In the else part, we can replace the Where(predicate).FirstOrDefault with FirstOrDefault(predicate), just as above. If we know that an item with the name “Default” will always be in the list, we can replace the call to FirstOrDefault with a call to First:

    else
    {
        returnValue = items.First(x => x.Name.ToLower() == "Default".ToLower()).Id;
    }

I have several problems with the expression: x => x.Name.ToLower() == name.ToLower(). First, I don’t like having to write it twice in that method. It’d be too easy to make a mistake and have the expressions end up doing different things. Second, ToLower with no arguments converts using the current culture: “current” meaning the CultureInfo settings on the machine that’s currently running the program. That’s not typically a problem, but different cultures have different case conversion rules. It’s best to use an invariant culture for things like this. See my article for more information about why.
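
If that seems like a theoretical concern, the classic demonstration is the Turkish “I”. Here’s a minimal sketch (assuming the tr-TR culture is available on the machine):

    using System;
    using System.Globalization;

    public class CaseDemo
    {
        public static void Main()
        {
            var turkish = new CultureInfo("tr-TR");

            // In Turkish, the lowercase of 'I' is the dotless 'ı' (U+0131),
            // so culture-sensitive case conversion breaks naive comparisons.
            Console.WriteLine("FILE".ToLower(turkish));             // "fıle"
            Console.WriteLine("FILE".ToLower(turkish) == "file");   // False

            // An invariant-culture comparison behaves the way you'd expect.
            Console.WriteLine(string.Equals("FILE", "file",
                StringComparison.InvariantCultureIgnoreCase));      // True
        }
    }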

So what I would do is create a Func<string, string, bool> that does the comparison, and pass it as a predicate to First and FirstOrDefault. Like this:

    var comparer =
        new Func<string, string, bool>((s1,s2) =>
            string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

    var item = items.FirstOrDefault(x => comparer(x.Name, name));
    if (item != null)
    {
        returnValue = item.Id;
    }
    else
    {
        returnValue = items.First(x => comparer(x.Name, "Default")).Id;
    }
    return returnValue;

Neat, right? Except that I’ve been burned enough in the past not to trust statements like, “There will always be a ‘Default’ item.” In my experience, “always” too often becomes “sometimes” or “not at all”. I like to think that I’m cautious. Others tend to call me paranoid, or at minimum overly concerned with unlikely possibilities. Whatever the case, this code will die horribly if there is no “Default” item; it crashes with a NullReferenceException.

As I’ve said before, exceptions should point the finger at the culprit. In this case, a missing “Default” item means that the list is improperly formatted, which probably points to some kind of data corruption. The code should throw an exception that identifies the real problem rather than relying on the runtime to throw a misleading NullReferenceException. I would change the code to explicitly check for the missing “Default” case, and make it throw a meaningful exception.

The completed code looks like this:

    public int GetIdFromName(List<Item> items, string name)
    {
        var comparer =
            new Func<string, string, bool>((s1,s2) => 
                string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

        var item = items.FirstOrDefault(x => comparer(x.Name, name));
        if (item != null)
        {
            return item.Id;
        }

        // Check for Default
        item = items.FirstOrDefault(x => comparer(x.Name, "Default"));
        if (item != null)
        {
            return item.Id;
        }

        // Neither is there. Throw a meaningful exception.

        throw new InvalidOperationException("The Items list does not contain a default item.");
    }

I like that code because it’s easy to read and is very explicit in its error checking and in the message it outputs if there’s a problem. It’s a few more lines of C#, but it’s a whole lot easier to read and prove correct, and it handles the lack of a “Default” much more reasonably.
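
As an aside, the framework already provides an equality comparer that does the same job, so the hand-rolled Func isn’t strictly necessary. A sketch of the alternative:

    // StringComparer.InvariantCultureIgnoreCase implements
    // IEqualityComparer<string>, so it can stand in for the lambda.
    var cmp = StringComparer.InvariantCultureIgnoreCase;
    var item = items.FirstOrDefault(x => cmp.Equals(x.Name, name));

Either spelling works; the shape of the method stays the same.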

That solution is intellectually dissatisfying, though, because it enumerates the list twice if the requested name isn’t found. If I don’t use LINQ, I can easily do this with a single scan of the list:

    public int GetIdFromName2(List<Item> items, string name)
    {
        var comparer =
            new Func<string, string, bool>((s1,s2) =>
                string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

        Item defaultItem = null;

        foreach (var item in items)
        {
            if (comparer(item.Name, name))
            {
                return item.Id;
            }
            if (defaultItem == null && comparer(item.Name, "Default"))
            {
                defaultItem = item;
            }
        }
        
        if (defaultItem != null)
        {
            return defaultItem.Id;
        }

        throw new InvalidOperationException("The Items list does not contain a default item.");
    }

Try as I might, I can’t come up with a simple LINQ solution that scans the list only once. The best I’ve come up with is a complicated call to Enumerable.Aggregate that looks something like this:

    public int GetIdFromName3(List<Item> items, string name)
    {
        var comparer =
            new Func<string, string, bool>(
                (s1, s2) => string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

        Item foo = null;
        var result =
            items.Aggregate(
                new {item = foo, isDefault = false},
                (current, next) =>
                {
                    if (current.item == null)
                    {
                        if (comparer(next.Name, name)) return new {item = next, isDefault = false};
                        if (comparer(next.Name, "Default")) return new {item = next, isDefault = true};
                        return current;
                    }
                    // current item is not null.
                    // if it's a default item, then check to see if the next item is a match for name.
                    if (current.isDefault && comparer(next.Name, name)) return new {item = next, isDefault = false};

                    // otherwise just return the current item
                    return current;
                });
        if (result.item != null)
            return result.item.Id;

        throw new InvalidOperationException("The Items list does not contain a default item.");
    }

That should work, but it’s decidedly ugly and requires a full scan of the list. If I saw that in production code, I’d probably tell the programmer to rewrite it. Of the two other LINQ solutions I considered, one involves sorting with a custom comparer that puts the desired item at the front of the result, and the other involves calling ToLookup to create a lookup table. Both are similarly ugly, require a lot of extra memory, and also require a full scan of the list.
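
For the curious, the ToLookup variant I considered looks roughly like this (a sketch only; it buffers every item into a lookup table, which is part of why I rejected it):

    // Build a lookup keyed on the lower-cased name, then probe it twice.
    var byName = items.ToLookup(x => x.Name.ToLowerInvariant());
    var match = byName[name.ToLowerInvariant()].FirstOrDefault()
                ?? byName["default"].FirstOrDefault();
    if (match == null)
        throw new InvalidOperationException("The Items list does not contain a default item.");
    return match.Id;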

If you can come up with a single, simple LINQ expression that fulfills the requirements, I’d sure like to see it.

Notes from the commute

As I mentioned this morning, I’m testing out commuting options. Today’s experiment was to ride the train from Lakeline Station to downtown, and then make the return trip (20 miles) on my bike. The train ride in was, as usual, uneventful. The in-train bike rack is easy enough to use. I rode the six blocks from the downtown station to the office, and was able to store the bike in a corner out of the way.

The ride home is best described as a series of short notes.

  • Jim’s first rule of bicycle commuting is, “You forgot something.” I forgot my sunglasses. A pretty minor omission, really. Not like forgetting my helmet or shoes.
  • I waited five minutes for the elevator before I got annoyed and walked the bike out to the parking garage and just rode down the ramp. Next time I’ll go directly to the parking garage.
  • Most of the ride back to Lakeline is along a route that I used to travel regularly when I worked at the State Capitol 10 or 12 years ago.
  • Riding in downtown Austin is okay as long as you stay in the bike lanes and stay alert.
  • The street after 39th Street is 39-1/2 Street, not 40th Street as one would expect. I don’t recall seeing 38-1/2 Street. Guess I should pay better attention to street signs.
  • A long section of Shoal Creek Blvd. is closed for road construction. Fortunately there are detour signs.
  • If you value your life, do not ride Allendale Road during rush hour, regardless of what the detour signs tell you.
  • Due to traffic signals and construction detours, the first third of the ride takes almost half the time.
  • The hills are taller and steeper than they were the last time I rode that route.
  • There are more bike lanes than there used to be.
  • Two bottles of water is not enough when it’s hot. That’s what convenience stores are for.
  • A quart of bottled water costs 50% more than it did 10 years ago.
  • 20 miles on one of the hottest days of the summer (over 100 degrees) probably isn’t how I should have started after not riding for almost a year.
  • Good thing I had a meeting until 5:30. If I had left at my normal time an hour earlier, I might have melted.
  • Google Maps did a credible job of mapping the route for me. I made a few small changes.
  • My legs are tired. They’ll be sore tomorrow.

All that said, I’m hopeful that I can make the same ride on Thursday, although I might cut the trip short: ride to one of the train stations closer to downtown, and take that up to Lakeline. It depends on how my legs feel Thursday.

 

Three rides for the price of two

I got a new job recently, doing server-side programming for a mobile games company. The big change is that the company’s office is in downtown Austin, about 25 miles from our house. More importantly, it’s a 40 minute drive in light traffic. In normal rush hour traffic the drive is … well, it could be 45 minutes or it could be two hours if there’s a wreck. But even relatively light rush hour traffic is hard on the nerves.

So I’ve started riding the train to work. It’s a six mile drive from the house to the train station, a 40 minute ride to downtown, and a six block walk to the office. I get to the office with a fresh mind, rather than being frazzled by dealing with traffic. And the trip home is pretty relaxing. Most days my brain actually comes through the door with my body.

The only possible drawback is the cost. A one-way ticket on the train is $3.50, and a 24-hour rail pass is $7.00. So for the last two weeks my daily commute has cost $7.00 for the train and about a half gallon of gas getting to and from the train station. Not terrible, really, as driving to work and back would cost me about two gallons of gas. At the current price of $2.50 per gallon, that makes the train trip $8.25 and driving about $5.00. If I had to pay for parking in downtown Austin, the cost of driving would be prohibitive. Fortunately, the company I work for pays for garage parking.

I didn’t take into account wear and tear on the vehicle, but that’s not going to add up to $3.00 per day. On a purely financial level, driving to work is less expensive than riding the train. It’s hard to put a number on my mental health, though. I seriously dislike driving in traffic. I don’t know if Debra has noticed it, but I feel much more relaxed and less irritable on the days I ride the train.

Discount passes are available. For $26.50 I can get a 7-day commuter pass, or about $5.30 per working day. For a little less than $100, I can get a 30-day pass. If you figure 22 working days a month, that works out to around $4.50 per day. That’s cheaper than driving, even at today’s low gas prices. I think I can get that monthly pass through the cafeteria plan with pre-tax dollars, which would reduce it to something like $75, or about $3.50 per working day. The only drawback to these commuter passes is that I incur a commitment. For the weekly pass, I lose money if I don’t ride the train at least four days out of the week. The monthly pass at retail would require me to ride 15 days out of the month, or 11 days if I can get it pre-tax.

It occurred to me last week that I can adjust my schedule just a bit and save some money on train rides. Remember that my daily commuting ticket is actually a 24 hour pass: it’s good for 24 hours from the time I buy it. So if I were to buy the pass at 7:00 AM Monday, I can ride the train to work and back. But the pass is still good until 7:00 AM Tuesday. If I catch one of the two earlier trains (6:08 AM or 6:49 AM), I don’t have to pay for a ticket!

Tuesday afternoon I buy a ticket for the late (5:30) train and ride home. Wednesday morning I take the train in, and Wednesday afternoon my ticket is still good as long as I catch a train before 5:30. All told, I’ve made three round trips for the price of two: a 33% discount. If I go the whole week like that, I end up paying for three round trips and one one-way (the trip home Friday night), for a total of $24.50, or $4.90 per day. That’s cheaper than driving, and I don’t incur a long-term commitment.

But I can do better. What happens if I take my bike on the train on Tuesday morning, and ride the bike back to the station that evening rather than riding the train? If I do the same thing on Thursday, then I end up buying three tickets per week (total, $21.00). For that I get five morning trips to downtown and three trips home. Riding my bike home isn’t a particular hardship unless it’s cold or raining, and I need the exercise anyway. So my commuting cost is $21.00 per week, or $4.20 per day. That’s cheaper than driving and cheaper than the monthly rail pass, but not as good as the pre-tax plan. But I also get a 20 or 25 mile bike ride in twice per week, and it only costs me a little extra time. Sounds like a win to me.
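
For anyone who wants to check my arithmetic, here’s the whole comparison as a quick back-of-the-envelope program, using only the prices quoted above (gas at $2.50 per gallon, $7.00 day pass, $3.50 one-way):

    using System;

    public class CommuteCost
    {
        public static void Main()
        {
            const double gasPerGallon = 2.50;

            // Driving: about two gallons round trip, gas only.
            double drive = 2.0 * gasPerGallon;                  // $5.00/day

            // Train every day: $7.00 pass plus a half gallon to the station.
            double trainDaily = 7.00 + 0.5 * gasPerGallon;      // $8.25/day

            // Three round trips for the price of two, plus Friday's one-way.
            // (Like the figures above, these ignore the drive to the station.)
            double threeForTwo = (3 * 7.00 + 3.50) / 5;         // $4.90/day

            // Bike home Tuesday and Thursday: three tickets per week.
            double bikeTwice = (3 * 7.00) / 5;                  // $4.20/day

            Console.WriteLine("Drive {0:F2}  Train {1:F2}  3-for-2 {2:F2}  Bike {3:F2}",
                drive, trainDaily, threeForTwo, bikeTwice);
        }
    }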

I’m testing that this week. Monday morning I took the later (7:00 AM) train to the office, and today I brought my bike along when I caught the 6:08 AM train. So this afternoon I either buy a return ticket or I ride the bike back. It’s going to be hot, but I don’t mind. I have plenty of water.

Rebuilding a bench

Driving home from work one day in April, I spied this bench sitting in front of a neighbor’s house. It had a sign on it that said, “Free. Just needs new boards.”

bench_640

It certainly needed new boards. The metal also had some rust and I figured if I was going to the trouble of disassembling the thing to add new boards, I’d refinish the metal, too.

The next question was what wood to use for the boards. About a week after I acquired the bench, I was out at my friend Mike’s place. He had an old cedar post that he had no use for. Said he’d been saving it for me because he thought I might want it for carving wood.

So I took the post to TechShop and cut it into boards. Here it is on the bandsaw, shortly after I started cutting it up.

cedar_640

And here are the rough cut boards, all of which are between one and one and a half inches thick. They’re about five feet long and almost eight inches wide.

boards

I put the boards up in the garage rafters to dry for a while (about two months), and put the bench aside. I also took the TechShop class on sand blasting and powder coating so that I could use that equipment when it came time to finish the project.

I had some time on my hands last week because I found myself between jobs for a week. So one day I went to TechShop with the metal pieces and the boards, planning to finish the bench. It took almost four hours to sandblast the sides and back, and another three hours to powder coat them. A long time, but the results were worth it.

My next task was to dimension the lumber: plane it down to 3/4″ thickness and cut the slats to size (2″ wide by 48″ long). Planing went as planned and when I finished I went to check out the table saw key. But the saw was down due to a faulty safety switch.

Two days later the saw was up and I got the boards cut to size. Then I set up a jig on the drill press and put the screw holes in the end. At about 6:00 that evening I put the bench together for a test fit.

 

testbench_640

I was so happy with the way it turned out. I knew I’d have to take it apart, of course, to put a finish on the wood, but it looked so nice! Until I sat on it and it sagged. Turns out that, although beautiful, that red cedar isn’t nearly as strong as whatever wood was originally on the bench. I’d have to strengthen the boards or the bench would be just for show.

So the next day I went back to TechShop and spent some time adding a spine to the bottom of each seat slat:

spine

The spine is simply a piece of wood that’s 1″ wide by 3/4″ thick, and about three inches shorter than the slat. It’s attached with screws and wood glue (and on this one, a couple of dowels). The spine is oriented vertically (i.e. it’s one inch tall in this picture), and more than doubles the strength of the slat. By now it was Friday night and I had to let the glue dry for at least 24 hours before applying a finish.

I applied the finish on Sunday morning: two coats of Teak Oil. I let that cure for about 10 hours, and Debra helped me put everything together.

benchdone_640

The bench now sits in a flower bed in the front yard.

More than anything, this project was a learning experience for me. I had to learn how to use the sandblasting cabinet (no trouble, really), and how to powder coat something. Although the powder coating turned out okay, there are some things I could have done better. I also learned how to dimension the lumber and make sure that all of the boards were exactly the same size. Even setting up the drill press and making sure the holes were all in their right places took a few tries. I was smart enough to use some scrap wood for that because I had only enough wood for one mistake. And that spare ended up being used, because one of the slats had a weak spot that would have broken the first time I sat on it.

And I certainly didn’t save any money on the thing. These benches are available brand new for $70 or $80 online or in local stores. I spent nearly $50 on the brass screws! The powder for the powder coating was another $20, although I have about half of that left over. All told, the thing probably cost me $100 out of pocket for various hardware items and tools, not counting the cost of the class. Much of that, of course, can be amortized over many projects. It’s not like I’ll have to pay for another class when it comes time to powder coat the porch furniture that’s losing its paint.

It took a good four or five days of work between cutting up the lumber, stripping the paint prior to sandblasting, and then the sandblasting, powder coating, and extra wood work to add the spines. Learning new things takes time.

But it’s the best looking bench of its type!

Besides, I’ve said before that no self-respecting hobbyist would pay $20 for something he can build himself for $50.

It really was a good learning experience, and I could probably do it again for much less money and in a lot less time. If I run across another of these benches free for the taking, I’ll pick it up. I don’t have enough cedar to do another one, although I could likely get more if I wanted to. I do, however, have plenty of other types of wood that would look good on such a bench.

 

 

A chance meeting

Debra and I went to Phoenix last week to attend a court hearing (a family matter), and to get some things she had in storage. The trip itself was not exceptional except for one thing.

When we arrived at the court house on Tuesday, Debra went to the information desk to find out which room the hearing would be held in. I was seated not far away answering a work-related mail message on my phone, but within earshot. I could hear that people were talking and could even pick out Debra’s voice, but I wasn’t paying attention to what was being said. But then I heard another voice that sounded very familiar.

When I looked up, Debra was walking away from the information desk, and the familiar voice was coming from a Sheriff’s deputy at the desk who was looking in her direction; from my perspective he was facing more away from me than in exact profile. I suppose you could say that I was seeing his right rear quarter.

The voice, combined with the profile and the uniform, prompted me to walk toward him and say, “Bob?” He turned around and, sure enough, the deputy was indeed Bob: a guy I knew in military school. We’d met at a few reunions since then, and when I saw him eight or ten years ago he was a deputy with the Maricopa County Sheriff.

We chatted for a while and marveled at the unlikely meeting. Bob’s take on things was “It’s a small world,” and I couldn’t disagree.

But on the way back from the hearing I got to wondering, “what are the odds?” More precisely, what was the probability that I would run into that person at that time? At first it seemed highly unlikely, but then I did a little calculation.

The number of Sheriff’s deputies in Maricopa County is something less than 1,000. If you assume that we had to encounter a deputy, then the odds of it being Bob were, at worst, one in 1,000. That’s not so very unlikely. If you narrow it down to deputies who have that court duty (probably fewer than 100), then it’s only one in 100. And if you restrict it to deputies who have duty in that particular court building, then it’s probably better than one in ten.

Given the conditions, that I ran into Bob is not terribly surprising. It wouldn’t seem surprising at all if I lived in Phoenix and knew that his current assignment was at that building. In fact, I probably would have been looking for him when I entered the building and might even have been surprised if I didn’t see him.

Mathematics aside, a seemingly random encounter with an old friend was a pleasant surprise.
