Drawing attention to or hiding errors

I started my programming career writing COBOL programs for banks. One of my early tasks had me writing a program that would send a letter to all of our loan customers, informing them of a change in the payment notices. Included in that letter was to be a sample of the new payment coupon, which was to be clearly marked as a sample so that nobody would think it was a bill and send a payment.

My design for the sample coupon included the words NOT AN INVOICE and NO PAYMENT REQUIRED and DO NOT PAY FROM THIS SAMPLE printed in several places. In addition, the account number printed on the coupon was something innocuous like “123-456-789”, and the customer’s name and address on all the coupons were the same:

Stanley Steamroller
123 Main St.
Thule, Greenland

And the amount due was “$123.45”.

My boss had me take that to the branch manager for approval. The manager praised my good thinking for including the NOT AN INVOICE and other wording, and the obviously fake name and address. His comment: “I was worried that customers might complain about an extra payment notice, but what you have here is clearly a sample. Nobody will be confused by this.”

To my knowledge, nobody called to complain that they had already made their payment and that they didn’t appreciate this erroneous invoice. We did, however, receive several checks for $123.45, with the account number 123-456-789 written in the Memo field, nicely packaged up with the sample payment coupon. It’s fortunate that the checks had the senders’ addresses on them. Otherwise we would not have known who to contact.

The first lesson I learned from this experience is that some people see only what they expect to see ( “Oh, look, a loan payment notice from the bank. Guess I’ll pay it.”). With a similar mailing some months later, I learned that if you want people to stop and think about what they’re looking at, make a glaring error. If I had made that amount $7,743,926.59, I suspect nobody would have sent a check. We might have had a few calls from irate customers saying that they couldn’t possibly owe seven million dollars on their $15,000 loan, but it’s likely that they’d examine the notice a little more carefully before picking up the phone.

If you want people to notice something in a document, make an error that’s impossible to miss. That’ll force them to look more carefully at the rest of the page.

Oddly enough, the converse of that is also true in some situations. When preparing for room inspections at military school, I’d purposely leave something out of order. I wouldn’t make it too obvious, but it’d be something that the upperclassmen always looked for, and that was commonly in error. I found that if I tried to make my room perfect, those guys would spend entirely too much time looking for something wrong. But if I made one or two easy-to-find errors, they’d find the discrepancy, mark it down, and then leave the room happy that they did their jobs.

I think the difference is expectation. When somebody sends you information that you assume to be correct (like a statement from your bank), a glaring error makes you examine the rest of the information more carefully. But an upperclassman who’s looking for an opportunity to gig a subordinate will stop as soon as he’s found an error or two. He has proven his superiority and he has other rooms to inspect.

I’ve heard that the tactic works for tax auditors, too: give him an obvious reason to make you pay a little extra tax, and he’ll give the rest of your records a cursory glance before declaring everything in order. He’s proven his worth, so he can pack up his calculator and head off to torture his next victim.

Leadership and development teams

When he can, a leader will explain to his subordinates the reasons for his orders. Not because he has to, but because he knows that people usually do a better job when they know why they’re doing it. In addition, the leader’s subordinates are then willing to follow orders that the leader can’t explain (usually due to time pressure or enforced secrecy) because they trust that he has good reasons.

The leader earns obedience through mutual trust and respect.

Leadership is about building and directing a team: a group of people who trust and respect each other, and work together towards a common goal. It’s also about giving team members the information and authority they need to get the job done, and then letting them do it. The person who insists on being involved in the minutiae of every team member’s job is no leader at all, but rather a meddler who kills morale and destroys the team’s performance.

The best leaders are those who appear to do nothing at all, but behind the scenes they’re getting the team members the information and access they need, and are clearing obstacles from the team’s path before the rest of the team members even know those obstacles exist.

A well-led team can function without the leader for a surprisingly long time. If the leader has done his job, then his team’s way has been cleared of any looming obstacles, they have everything they need to complete their current tasks, and they have a good idea of what they’ll be doing next.

In short: the leader doesn’t do the work; he makes it possible for his team to do the work.

The above is as true for a programming team lead as it is for the leader of a military unit or a senior manager of a Fortune 500 company. The team’s jobs might be completely different, but the team leader’s job is essentially the same: create an environment that makes it possible for the team to get the job done, and then step back. Observe. Give encouragement and direction when necessary. But let the team members do the things they’ve been hired to do.

It’s been my experience that most software development teams do not have good leaders. It shows in missed release dates, high bug counts, frequent “hotfix” releases, team member dissatisfaction and flouting of imposed standards, and high levels of past-due technical debt. The result is bloated, crufty, unstable code characterized by tight coupling, low cohesion, quick fixes, special cases, and convoluted dependencies. All to the detriment of the product.

All too often, the “leader” of the team has upper management convinced that the type of software his team is building is somehow different, and those problems are unavoidable in the company’s particular case. He probably believes it himself. As long as management accepts that, they will continue to experience missed delivery dates and their users will continue to be unhappy.

It’s all about context

The C# using directive and implicitly typed local variables (i.e. using var) are Good Things whose use should be encouraged in C# programs, not prohibited or severely limited. Used correctly (and it’s nearly impossible to use them incorrectly), they reduce noise and improve understanding, leading to better, more maintainable code. Limiting or prohibiting their use causes clutter, wastes programmers’ time, and leads to programmer dissatisfaction.

I’m actually surprised that I find it necessary to write the above as though it’s some new revelation, when in fact the vast majority of the C# development community agrees with it. Alas, there is at least one shop–the only one I’ve ever encountered, and I’ve worked with a lot of C# development houses–that severely limits the use of those two essential language features.

Consider this small C# program that generates a randomized list of numbers from 0 through 99, and then does something with those numbers:

    namespace MyProgram
    {
        public class Program
        {
            static public void Main()
            {
                System.Collections.Generic.List<int> myList = new System.Collections.Generic.List<int>();
                System.Random rnd = new System.Random();
                for (int i = 0; i < 100; ++i)
                {
                    myList.Add(i);
                }
                System.Collections.Generic.List<int> newList = RandomizeList(myList, rnd);

                // now do something with the randomized list
            }

            static private System.Collections.Generic.List<int> RandomizeList(
                System.Collections.Generic.List<int> theList,
                System.Random rnd)
            {
                System.Collections.Generic.List<int> newList = new System.Collections.Generic.List<int>(theList);
                for (int i = theList.Count-1; i > 0; --i)
                {
                    int r = rnd.Next(i+1);
                    int temp = newList[r];
                    newList[r] = newList[i];
                    newList[i] = temp;
                }
                return newList;
            }
        }
    }
I know that’s not the most efficient code, but runtime efficiency is not really the point here. Bear with me.

Now imagine if I were telling you a story about an experience I shared with my friends Joe Green and Bob Smith:

Yesterday, I went to Jose’s Mexican Restaurant with my friends Joe Green and Bob Smith. After the hostess seated us, Joe Green ordered a Mexican Martini, Bob Smith ordered a Margarita, and I ordered a Negra Modelo. For dinner, Joe Green had the enchilada special, Bob Smith had Jose’s Taco Platter . . .

Wouldn’t you get annoyed if, throughout the story, I kept referring to my friends by their full names? How about if I referred to the restaurant multiple times as “Jose’s Mexican Restaurant” rather than shortening it to “Jose’s” after the first mention?

The first sentence establishes context: who I was with and where we were. If I then referred to my friends as “Joe” and “Bob,” there would be no ambiguity. If I were to write 50 pages about our experience at the restaurant, nobody would get confused if I never mentioned my friends’ last names after the first sentence. There could be ambiguity if my friends were Joe Smith and Joe Green, but even then I could finesse it so that I didn’t always have to supply their full names.

Establishing context is a requirement for effective communication. But once established, there is no need to re-establish it if nothing changes. If there’s only one Joe, then I don’t have to keep telling you which Joe I’m referring to. Doing so interferes with communication because it introduces noise into the system, reducing the signal-to-noise ratio.

If you’re familiar with C#, most likely the first thing that jumps out at you in the code sample above is the excessive use of full namespace qualification: all those repetitions of System.Collections.Generic.List that clutter the code and make it more difficult to read and understand.

Fortunately, the C# using directive allows us to establish context, thereby reducing the amount of noise in the signal:

    using System;
    using System.Collections.Generic;

    namespace MyProgram
    {
        public class Program
        {
            static public void Main()
            {
                List<int> myList = new List<int>();
                Random rnd = new Random();
                for (int i = 0; i < 100; ++i)
                {
                    myList.Add(i);
                }
                List<int> newList = RandomizeList(myList, rnd);

                // now do something with the randomized list
            }

            static private List<int> RandomizeList(
                List<int> theList,
                Random rnd)
            {
                List<int> newList = new List<int>(theList);
                for (int i = theList.Count-1; i > 0; --i)
                {
                    int r = rnd.Next(i+1);
                    int temp = newList[r];
                    newList[r] = newList[i];
                    newList[i] = temp;
                }
                return newList;
            }
        }
    }
That’s a whole lot easier to read because you don’t have to parse the full type name to find the part that’s important. Although it might not make much of a difference in this small program, it makes a huge difference when you’re looking at code that uses a lot of objects, all of whose type names begin with MyCompany.MyApplication.MySubsystem.MyArea.

The use of using is ubiquitous in C# code. Every significant Microsoft example uses it. Every bit of open source code I’ve ever seen uses it. Every C# project I’ve ever worked on uses it. I wouldn’t think of not using it. Nearly every C# programmer I’ve ever met, traded email with, or seen post on StackOverflow and other programming forums uses it. Even the very few who don’t generally use it bend the rules when LINQ is involved, and for extension methods in general.
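
The reason LINQ in particular forces the rules to bend is that extension methods can only be called with instance-style syntax when their namespace is imported. Here’s a minimal sketch of my own (not from any of the shops mentioned) showing the difference; without the using System.Linq; directive, the first call below won’t compile and you’re stuck with the verbose static form:

    using System.Collections.Generic;
    using System.Linq;

    public class ExtensionMethodDemo
    {
        public static void Main()
        {
            var numbers = new List<int> { 1, 2, 3, 4 };

            // Extension-method (instance) syntax requires "using System.Linq;".
            var evens = numbers.Where(n => n % 2 == 0);

            // Without the directive you'd have to spell out the static call instead:
            var evens2 = System.Linq.Enumerable.Where(numbers, n => n % 2 == 0);
        }
    }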

I find it curious that extension methods are excepted from this rule. I’ve seen extension methods cause a whole lot more programmer confusion than the using directive ever has. Eliminating most in-house created extension methods would actually improve the code I’ve seen in most C# development shops.

The arguments against employing the using directive mostly boil down to, “but I can’t look at a code snippet and know immediately what the fully qualified name is.” And I agree: somebody who is unfamiliar with the C# base class library or with the libraries being used in the current project will not have the context. But it’s pretty unlikely that a company will hire an inexperienced C# programmer for a mid-level job, and anybody the company hires will be unfamiliar with the project’s class layout. In either case, a new guy might find the full qualification useful for a week or two. After that, he’s going to be annoyed by all the extra typing, and having to wade through noise in order to find what he’s looking for.

As with having friends named Joe Green and Joe Smith, there is potential for ambiguity if you have identically named classes in separate namespaces. For example, you might have an Employee class in your business layer and an Employee class in your persistence layer. But if your code is written rationally, there will be very few source files in which you refer to both. And in those, you can either revert to fully-qualified type names, or you can use namespace aliases:

    using BusinessEmployee = MyCompany.MyProject.BusinessLogic.Employee;
    using PersistenceEmployee = MyCompany.MyProject.Persistence.Employee;

Neither is perfect, but either one is preferable to mandating full type name specification in the entire project.

The code example still has unnecessary redundancy in it that impedes understanding. Consider this declaration:

    List<int> myList = new List<int>();

That’s like saying, “This variable called ‘myList’ is a list of integers, and I’m going to initialize it as an empty list of integers.” You can say the same thing more succinctly:

    var myList = new List<int>();

That eliminates the redundancy without reducing understanding.

There is some minor debate in the community about using var (i.e. implicit typing) in other situations, such as:

    var newList = RandomizeList(myList, rnd);

Here, it’s not plainly obvious what newList is, because we don’t know what RandomizeList returns. The compiler knows, of course, so the code isn’t ambiguous, but we mere humans can’t see it immediately. But if you’re using Visual Studio or any other modern IDE, you can hover your mouse over the RandomizeList, and a tooltip will appear, showing you the return type. And if you’re not using a modern IDE to write your C# code, you have a whole host of other problems that are way more pressing than whether or not a quick glance at the code will reveal a function’s return type.

I’ve heard people say, “I use var whenever the type is obvious, or when it’s necessary.” The “necessary” part is a smokescreen. What they really mean is, “whenever I call a LINQ method.” That is:

    var oddNumbers = myList.Where(i => (i%2) != 0);

The truth is, they could easily have written:

    IEnumerable<int> oddNumbers = . . .

The only time var is absolutely necessary is when you’re working with anonymous types. For example, this code creates an anonymous type that contains each element’s index and value:

    var indexedValues = myList.Select((val, i) => new {Index = i, Value = val});

I used to be in the “only with LINQ” camp, by the way. But after a while I realized that most of the time the return type was perfectly obvious from how the result was used, and in the few times it wasn’t, a quick mouse hover revealed it. I’m now firmly in the “use implicit typing wherever it’s allowed” camp, and my coding has improved as a result. With the occasional exception of code snippets taken out of context, I’ve never encountered a case in which using type inference made code harder to understand.

If you find yourself working in one of the hopefully very few shops that restricts the use of these features, you should actively encourage debate and attempt to change the policy.

If you’re one of the hopefully very few people attempting to enforce such a policy (and I say “attempting” because I’ve yet to see such a policy successfully enforced), you should re-examine your reasoning. I think you’ll find that the improvements in code quality and programmer effectiveness that result from the use of these features far outweigh the rare minor inconveniences you encounter.

How to waste a developer’s time

A company I won’t name recently instituted a policy that has their two senior developers spending a few hours per week–half hour to an hour every day–examining code pushed to the source repository by other programmers. At first I thought it was a good idea, if a bit excessive. Then I learned what they’re concentrating on.

Would you believe they’re concentrating on things like indentation, naming conventions, and other such mostly lexical constructs? That’s right, they’re paying senior developers to spend something like 10% of their time doing a job that can be done more accurately, more completely, and much faster by an inexpensive tool that is triggered with every check-in.

And they wonder why they can’t meet ship dates.

Style issues are important, but you don’t pay your best developers to play StyleCop. If they’re going to review others’ code, they should be concentrating on system architecture issues, correctness, testability, maintainability, critical performance issues, and a whole host of high-level issues that it takes actual human intelligence to evaluate.

Having your best developers spend time acting like mindless automatons is not only a huge waste of money and talent, it’s almost guaranteed to cost you a senior developer or two. I hope, anyway. Forcing any developer to play static analyzer on a regular basis is one of the fastest ways to crush his spirit. And any developer who’s worth the money you’re paying him will be working for your competitor in no time flat rather than waste his time doing monkey work.

The drawbacks of having your senior developers spend their time examining code for style issues:

  • They can’t do as good a job as StyleCop and other tools.
  • They can’t do it as fast as StyleCop and other tools.
  • They can’t do it on every check-in.
  • There are much more important things for them to spend their time on.
  • They hate doing it.
  • They will find somewhere else to work. Some place that values their knowledge and experience, and gives them interesting problems to solve.

Benefits of having your senior developers spend their time examining code for style issues:

  • ??

Jim’s building furniture?

I might have mentioned a while back that Debra bought me a one-year membership to TechShop for Christmas last year. She also gave me a gift certificate for five classes. The way TechShop works, a member has to take a class covering Safety and Basic Use before using most of the equipment. The five classes I’ve taken so far are Wood Shop SBU (covers table saw, bandsaw, sander, drill press); Jointer, Planer, and Router; Wood Lathe; Laser Cutter; and Sandblasting and Powder Coating.

Since January I’ve spent lots of time at TechShop, mostly cutting up logs for carving wood. I have a bandsaw at home, but it can’t cut material thicker than six inches, and its throat depth is something less than twelve inches. The bandsaw at TechShop can cut a twelve-inch-thick log, and has a much deeper throat. I can cut bigger stuff on their bandsaw.

In late August, I saw that my neighbor who owns a handyman business had a trailer full of wood. It turns out that he had removed an old cedar fence and deck for a client, and was going to haul the wood off to the dump. I spent six hours pulling nails, and drove home with a truckload of reclaimed lumber. Then I got busy making things.


I had seen an American flag made from old fencing, and wanted to make one. Given that I now had plenty of old fence to play with, I cut a bunch of one-inch strips, painted them up, and glued them back together. The result is this flag, which is 13 inches tall and 25 inches wide, approximately matching the official width-to-height ratio of 1.9.


I’ve since made two others that size, and one smaller. Then I stepped up to 1.5 inch strips to make a flag that’s 19.5″ x 37″, and got the crazy idea that it’d look better in a coffee table than it would hanging on the wall. So, blissfully ignorant of what I was getting into, I set about building a coffee table.

The only thing I’d built from scratch before was that oval table, and it got covered with a tablecloth. Besides, all I did was slightly modify a work table plan that I found online. I designed this table myself, and set the goal of making it from 100% reclaimed wood. I’ll save the step-by-step instructions for another time. Suffice it to say that it took me an absurdly long time and a lot of mistakes to finish constructing it. But the result is, I think, very nice.


The table is 17.5 inches tall, and measures approximately 42″ x 26″. The only wood on it that’s not reclaimed is the eight dowels used to hold the apron together.

Mind you, the table is constructed, but it’s not finished. The flag is recessed about 1/8 inch below the border. I’m going to fill that space with epoxy resin. Most likely, the resin will also flow over the border to protect it. I’ll use an oil finish, probably something like tung oil.

I couldn’t build just one table, though. So shortly after I finished that one I made a Texas state flag. Last night I completed construction of my Texas table, which measures about 42″ x 30″, and is also 17.5 inches tall. It’s wider than the other table due to the Texas flag’s width-to-height ratio of 3 to 2. So a 36 inch wide flag is 24 inches tall.


This table, too, has the flag recessed, and the space will be filled with epoxy resin.

The construction of the Texas table is a little different than the first one. Most importantly to me, I constructed the apron with finger joints rather than using dowels at the corners. That allows me to say that the table is 100% reclaimed wood. Plus, the finger joints are easier. Dowels are a pain in the neck.

I’m going to make at least one more of each of these tables, probably using the Texas table design. But before I do that I need to work on a project for Debra. That one’s going to be especially fun because I’ll get to play with the ShopBot. Once I take the class, that is.


Cleaning up some code

One thing about inheriting a large code base that’s been worked on by many people over a long period of time is that it’s usually full of learning opportunities. Or perhaps teaching opportunities. Whatever the case, the other day I ran into a real whopper.

Imagine you have a list of names and associated identification numbers. Items in the list are instances of this class:

    public class Item
    {
        public string Name {get; set;}
        public int Id {get; set;}
    }

We now need a method that, given a list of items and a name, will search the list and return the identifier. Or, if the name doesn’t exist in the list, will return the identifier of the default item, named “Default”.

Ignore for the moment that expecting there to be a default item in the list is almost certainly a bad design decision. Assume that the program you’re working with has that restriction and there’s nothing you can do about it.

I’ve changed the names and simplified things a little bit by removing details of the class that aren’t relevant to the example, but that’s essentially the problem that the code I ran into had to solve. Here’s how the programmer approached that problem.

    public int GetIdFromName(List<Item> items, string name)
    {
        int returnValue;
        if (items.Where(x => x.Name.ToLower() == name.ToLower()).Count() > 0)
            returnValue = items.Where(x => x.Name.ToLower() == name.ToLower()).FirstOrDefault().Id;
        else
            returnValue = items.Where(x => x.Name.ToLower() == "Default".ToLower()).FirstOrDefault().Id;
        return returnValue;
    }

That code, although it fulfills the requirements, is flawed in many ways. First of all, it’s entirely too complicated and verbose for what it does. Remember, the high level task can be expressed by this pseudocode:

    if (list contains an item with the specified name)
        return that item's Id field
    else
        return the Id field of the item with name "Default"

Let’s look at that piece by piece.

Consider the first if statement:

    if (items.Where(x => x.Name.ToLower() == name.ToLower()).Count() > 0)

This code enumerates the entire list, counting the number of items whose Name property matches the supplied name parameter. But all we really want to know is if there is at least one matching member in the list. There’s no need to count them all. So rather than checking for Count() > 0, we can call Any, which will stop enumerating the first time it finds a match.

    if (items.Where(x => x.Name.ToLower() == name.ToLower()).Any())

And in fact there’s an Any overload that takes a predicate. So we can get rid of the Where, too:

    if (items.Any(x => x.Name.ToLower() == name.ToLower()))

So, if there is an item, then the code scans the list again to get the matching item. That line, too, has an extraneous Where that we can replace with a FirstOrDefault(predicate). So the first lines become:

    if (items.Any(x => x.Name.ToLower() == name.ToLower()))
        returnValue = items.FirstOrDefault(x => x.Name.ToLower() == name.ToLower()).Id;

That simplifies things a bit, but it seems silly to go through the list one time looking to see if something is there, and then go through it again to actually pick out the item. Much better to do a single scan of the list:

    int returnValue;
    var item = items.FirstOrDefault(x => x.Name.ToLower() == name.ToLower());
    if (item != null)
        returnValue = item.Id;

In the else part, we can replace the Where(predicate).FirstOrDefault with FirstOrDefault(predicate), just as above. If we know that an item with the name “Default” will always be in the list, we can replace the call to FirstOrDefault with a call to First:

        returnValue = items.First(x => x.Name.ToLower() == "Default".ToLower()).Id;

I have several problems with the expression: x => x.Name.ToLower() == name.ToLower(). First, I don’t like having to write it twice in that method. It’d be too easy to make a mistake and have the expressions end up doing different things. Second, ToLower() uses the current culture: “current” meaning the CultureInfo settings on the machine that’s currently running the program. That’s not typically a problem, but different cultures have different case conversion rules, so a comparison that works on one machine can fail on another. It’s best to use an invariant culture for things like this. See my article for more information about why.
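
To illustrate the culture problem, here’s a small example of my own (not from the code base in question). Under the Turkish culture, an uppercase “I” lowercases to a dotless “ı”, so a ToLower()-based comparison that looks perfectly safe on a US-English machine quietly fails:

    using System;
    using System.Globalization;
    using System.Threading;

    public class CultureDemo
    {
        public static void Main()
        {
            // Force the Turkish culture to demonstrate the classic "Turkish I" problem.
            Thread.CurrentThread.CurrentCulture = new CultureInfo("tr-TR");

            // "FILE".ToLower() produces "fıle" (dotless i) under tr-TR, so this prints False.
            Console.WriteLine("FILE".ToLower() == "file");

            // An invariant-culture comparison prints True regardless of the current culture.
            Console.WriteLine(string.Compare("FILE", "file",
                StringComparison.InvariantCultureIgnoreCase) == 0);
        }
    }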

So what I would do is create a Func<string, string, bool> that does the comparison, and use it in the predicates passed to First and FirstOrDefault. Like this:

    var comparer =
        new Func<string, string, bool>((s1,s2) =>
            string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

    var item = items.FirstOrDefault(x => comparer(x.Name, name));
    if (item != null)
        returnValue = item.Id;
    else
        returnValue = items.First(x => comparer(x.Name, "Default")).Id;
    return returnValue;

Neat, right? Except that I’ve been burned enough in the past not to trust statements like, “There will always be a ‘Default’ item.” In my experience, “always” too often becomes “sometimes” or “not at all”. I like to think that I’m cautious. Others tend to call me paranoid or at minimum overly concerned with unlikely possibilities. Whatever the case, this code will die horribly if there is no “Default” item; it crashes with a NullReferenceException.

As I’ve said before, exceptions should point the finger at the culprit. In this case, a missing “Default” item means that the list is improperly formatted, which probably points to some kind of data corruption. The code should throw an exception that identifies the real problem rather than relying on the runtime to throw a misleading NullReferenceException. I would change the code to explicitly check for the missing “Default” case, and make it throw a meaningful exception.

The completed code looks like this:

    public int GetIdFromName(List<Item> items, string name)
    {
        var comparer =
            new Func<string, string, bool>((s1,s2) =>
                string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

        var item = items.FirstOrDefault(x => comparer(x.Name, name));
        if (item != null)
            return item.Id;

        // Check for Default
        item = items.FirstOrDefault(x => comparer(x.Name, "Default"));
        if (item != null)
            return item.Id;

        // Neither is there. Throw a meaningful exception.
        throw new InvalidOperationException("The Items list does not contain a default item.");
    }

I like that code because it’s easy to read and is very explicit in its error checking and in the message it outputs if there’s a problem. It’s a few more lines of C#, but it’s a whole lot easier to read and prove correct, and it handles the lack of a “Default” much more reasonably.

That solution is intellectually dissatisfying, though, because it enumerates the list twice if the requested name isn’t found. If I don’t use LINQ, I can easily do this with a single scan of the list:

    public int GetIdFromName2(List<Item> items, string name)
    {
        var comparer =
            new Func<string, string, bool>((s1,s2) =>
                string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

        Item defaultItem = null;

        foreach (var item in items)
        {
            if (comparer(item.Name, name))
                return item.Id;
            if (defaultItem == null && comparer(item.Name, "Default"))
                defaultItem = item;
        }

        if (defaultItem != null)
            return defaultItem.Id;

        throw new InvalidOperationException("The Items list does not contain a default item.");
    }

Try as I might, I can’t come up with a simple LINQ solution that scans the list only once. The best I’ve come up with is a complicated call to Enumerable.Aggregate that looks something like this:

    public int GetIdFromName3(List<Item> items, string name)
    {
        var comparer =
            new Func<string, string, bool>(
                (s1, s2) => string.Compare(s1, s2, StringComparison.InvariantCultureIgnoreCase) == 0);

        Item foo = null;
        var result = items.Aggregate(
            new {item = foo, isDefault = false},
            (current, next) =>
            {
                if (current.item == null)
                {
                    if (comparer(next.Name, name)) return new {item = next, isDefault = false};
                    if (comparer(next.Name, "Default")) return new {item = next, isDefault = true};
                    return current;
                }
                // current item is not null.
                // if it's a default item, then check to see if the next item is a match for name.
                if (current.isDefault && comparer(next.Name, name)) return new {item = next, isDefault = false};

                // otherwise just return the current item
                return current;
            });

        if (result.item != null)
            return result.item.Id;

        throw new InvalidOperationException("The Items list does not contain a default item.");
    }

That should work, but it’s decidedly ugly and requires a full scan of the list. If I saw that in production code, I’d probably tell the programmer to rewrite it. Of the two other LINQ solutions I considered, one involves sorting with a custom comparer that puts the desired item at the front of the result, and the other involves calling ToLookup to create a lookup table. Both are similarly ugly, require a lot of extra memory, and also require a full scan of the list.
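
For what it’s worth, here’s roughly what the ToLookup variant might look like. This is my own sketch, not code I’d recommend: the method name is just for illustration, it sidesteps the comparer by lower-casing the keys, and it pays for the single pass by allocating an entire lookup table.

    public int GetIdFromNameLookup(List<Item> items, string name)
    {
        // One full scan of the list, plus the memory for the lookup table.
        var lookup = items.ToLookup(x => x.Name.ToLowerInvariant());

        // An absent key yields an empty sequence, so FirstOrDefault returns null.
        var match = lookup[name.ToLowerInvariant()].FirstOrDefault()
                    ?? lookup["default"].FirstOrDefault();

        if (match != null)
            return match.Id;

        throw new InvalidOperationException("The Items list does not contain a default item.");
    }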

If you can come up with a single, simple LINQ expression that fulfills the requirements, I’d sure like to see it.

Notes from the commute

As I mentioned this morning, I’m testing out commuting options. Today’s experiment was to ride the train from Lakeline Station to downtown, and then make the return trip (20 miles) on my bike. The train ride in was, as usual, uneventful. The in-train bike rack is easy enough to use. I rode the six blocks from the downtown station to the office, and was able to store the bike in a corner out of the way.

The ride home is best described as a series of short notes.

  • Jim’s first rule of bicycle commuting is, “You forgot something.” I forgot my sunglasses. A pretty minor omission, really. Not like forgetting my helmet or shoes.
  • I waited five minutes for the elevator before I got annoyed and walked the bike out to the parking garage and just rode down the ramp. Next time I’ll go directly to the parking garage.
  • Most of the ride back to Lakeline is along a route that I used to travel regularly when I worked at the State Capitol 10 or 12 years ago.
  • Riding in downtown Austin is okay as long as you stay in the bike lanes and stay alert.
  • The street after 39th Street is 39-1/2 street, not 40th Street as one would expect. I don’t recall seeing 38-1/2 Street. Guess I should pay better attention to street signs.
  • A long section of Shoal Creek Blvd. is closed for road construction. Fortunately there are detour signs.
  • If you value your life, do not ride Allendale Road during rush hour, regardless of what the detour signs tell you.
  • Due to traffic signals and construction detours, the first third of the ride takes almost half the time.
  • The hills are taller and steeper than they were the last time I rode that route.
  • There are more bike lanes than there used to be.
  • Two bottles of water is not enough when it’s hot. That’s what convenience stores are for.
  • A quart of bottled water costs 50% more than it did 10 years ago.
  • 20 miles on one of the hottest days of the summer (over 100 degrees) probably isn’t how I should have started after not riding for almost a year.
  • Good thing I had a meeting until 5:30. If I had left at my normal time an hour earlier, I might have melted.
  • Google Maps did a credible job of mapping the route for me. I made a few small changes.
  • My legs are tired. They’ll be sore tomorrow.

All that said, I’m hopeful that I can make the same ride on Thursday, although I might cut the trip short: ride to one of the train stations closer to downtown, and take that up to Lakeline. It depends on how my legs feel Thursday.


Three rides for the price of two

I got a new job recently, doing server-side programming for a mobile games company. The big change is that the company’s office is in downtown Austin, about 25 miles from our house. More importantly, it’s a 40 minute drive in light traffic. In normal rush hour traffic the drive is … well, it could be 45 minutes or it could be two hours if there’s a wreck. But even relatively light rush hour traffic is hard on the nerves.

So I’ve started riding the train to work. It’s a six mile drive from the house to the train station, a 40 minute ride to downtown, and a six block walk to the office. I get to the office with a fresh mind, rather than being frazzled by dealing with traffic. And the trip home is pretty relaxing. Most days my brain actually comes through the door with my body.

The only possible drawback is the cost. A one-way ticket on the train is $3.50, and a 24-hour rail pass is $7.00. So for the last two weeks my daily commute has cost $7.00 for the train and about a half gallon of gas getting to and from the train station. Not terrible, really, as driving to work and back would cost me about two gallons of gas. At the current price of $2.50 per gallon, that makes the train trip $8.25 and driving about $5.00. If I had to pay for parking in downtown Austin, the cost of driving would be prohibitive. Fortunately, the company I work for pays for garage parking.

I didn’t take into account wear and tear on the vehicle, but that’s not going to add up to $3.00 per day. On a purely financial level, driving to work is less expensive than riding the train. It’s hard to put a number on my mental health, though. I seriously dislike driving in traffic. I don’t know if Debra has noticed it, but I feel much more relaxed and less irritable on the days I ride the train.

Discount passes are available. For $26.50 I can get a 7-day commuter pass, or about $5.30 per working day. For a little less than $100, I can get a 30-day pass. If you figure 22 working days a month, that works out to around $4.50 per day. That’s cheaper than driving, even at today’s low gas prices. I think I can get that monthly pass through the cafeteria plan with pre-tax dollars, which would reduce it to something like $75, or about $3.50 per working day. The only drawback to these commuter passes is that I incur a commitment. For the weekly pass, I lose money if I don’t ride the train at least four days out of the week. The monthly pass at retail would require me to ride 15 days out of the month, or 11 days if I can get it pre-tax.

It occurred to me last week that I can adjust my schedule just a bit and save some money on train rides. Remember that my daily commuting ticket is actually a 24 hour pass: it’s good for 24 hours from the time I buy it. So if I were to buy the pass at 7:00 AM Monday, I can ride the train to work and back. But the pass is still good until 7:00 AM Tuesday. If I catch one of the two earlier trains (6:08 AM or 6:49 AM), I don’t have to pay for a ticket!

Tuesday afternoon I buy a ticket for the late (5:30) train and ride home. Wednesday morning I take the train in, and Wednesday afternoon my ticket is still good as long as I catch a train before 5:30. All told, I’ve made three round trips for the price of two: a 33% discount. If I go the whole week like that, I end up paying for three round trips and one one-way (the trip home Friday night), for a total of $24.50, or $4.90 per day. That’s cheaper than driving, and I don’t incur a long-term commitment.

But I can do better. What happens if I take my bike on the train on Tuesday morning, and ride the bike back to the station that evening rather than riding the train? If I do the same thing on Thursday, then I end up buying three tickets per week (total, $21.00). For that I get five morning trips to downtown and three trips home. Riding my bike home isn’t a particular hardship unless it’s cold or raining, and I need the exercise anyway. So my commuting cost is $21.00 per week, or $4.20 per day. That’s cheaper than driving and cheaper than the monthly rail pass, but not as good as the pre-tax plan. But I also get a 20 or 25 mile bike ride in twice per week, and it only costs me a little extra time. Sounds like a win to me.

I’m testing that this week. Monday morning I took the later (7:00 AM) train to the office, and today I brought my bike along when I caught the 6:08 AM train. So this afternoon I either buy a return ticket or I ride the bike back. It’s going to be hot, but I don’t mind. I have plenty of water.

Rebuilding a bench

Driving home from work one day in April, I spied this bench sitting in front of a neighbor’s house. It had a sign on it that said, “Free. Just needs new boards.”


It certainly needed new boards. The metal also had some rust and I figured if I was going to the trouble of disassembling the thing to add new boards, I’d refinish the metal, too.

The next question was what wood to use for the boards. About a week after I acquired the bench, I was out at my friend Mike’s place. He had an old cedar post that he had no use for. Said he’d been saving it for me because he thought I might want it for carving wood.

So I took the post to TechShop and cut it into boards. Here it is on the bandsaw, shortly after I started cutting it up.


And here are the rough cut boards, all of which are between one and one and a half inches thick. They’re about five feet long and almost eight inches wide.

I put the boards up in the garage rafters to dry for a while (about two months), and put the bench aside. I also took the TechShop class on sand blasting and powder coating so that I could use that equipment when it came time to finish the project.

I had some time on my hands last week because I found myself between jobs for a week. So one day I went to TechShop with the metal pieces and the boards, planning to finish the bench. It took almost four hours to sandblast the sides and back, and another three hours to powder coat them. A long time, but the results were worth it.

My next task was to dimension the lumber: plane it down to 3/4″ thickness and cut the slats to size (2″ wide by 48″ long). Planing went as planned and when I finished I went to check out the table saw key. But the saw was down due to a faulty safety switch.

Two days later the saw was up and I got the boards cut to size. Then I set up a jig on the drill press and put the screw holes in the end. At about 6:00 that evening I put the bench together for a test fit.



I was so happy with the way it turned out. I knew I’d have to take it apart, of course, to put a finish on the wood, but it looked so nice! Until I sat on it and it sagged. Turns out that, although beautiful, that red cedar isn’t nearly as strong as whatever wood was originally on the bench. I’d have to strengthen the boards or the bench would be just for show.

So the next day I went back to TechShop and spent some time adding a spine to the bottom of each seat slat:


The spine is simply a piece of wood that’s 1″ wide by 3/4″ thick, and about three inches shorter than the slat. The spine is attached with screws and wood glue (and on this one, a couple of dowels). The spine is attached vertically (i.e. it’s one inch tall in this picture), and more than doubles the strength of the slat. By now it was Friday night and I had to let the glue dry for at least 24 hours before applying a finish.

I applied the finish on Sunday morning: two coats of Teak Oil. I let that cure for about 10 hours, and Debra helped me put everything together.

The bench now sits in a flower bed in the front yard.

More than anything, this project was a learning experience for me. I had to learn how to use the sandblasting cabinet (no trouble, really), and how to powder coat something. Although the powder coating turned out okay, there are some things I could have done better. I also learned how to dimension the lumber and make sure that all of the boards were exactly the same size. Even setting up the drill press and making sure the holes were all in their right places took a few tries. I was smart enough to use some scrap wood for that because I had only enough wood for one mistake. And that ended up being used because one of the slats ended up with a weak spot that would have broken the first time I sat on it.

And I certainly didn’t save any money on the thing. These benches are available brand new for $70 or $80 online or in local stores. I spent nearly $50 on the brass screws! The powder for the powder coating was another $20, although I have about half of that left over. All told, the thing probably cost me $100 out of pocket for various hardware items and tools, not counting the cost of the class. Much of that, of course, can be amortized over many projects. It’s not like I’ll have to pay for another class when it comes time to powder coat the porch furniture that’s losing its paint.

It took a good four or five days of work between cutting up the lumber, stripping the paint prior to sandblasting, and then the sandblasting, powder coating, and extra wood work to add the spines. Learning new things takes time.

But it’s the best looking bench of its type!

Besides, I’ve said before that no self-respecting hobbyist would pay $20 for something he can build himself for $50.

It really was a good learning experience, and I could probably do it again for much less money and in a lot less time. If I run across another of these benches free for the taking, I’ll pick it up. I don’t have enough cedar to do another one, although I could likely get more if I wanted to. I do, however, have plenty of other types of wood that would look good on such a bench.



A chance meeting

Debra and I went to Phoenix last week to attend a court hearing (a family matter), and to get some things she had in storage. The trip itself was not exceptional except for one thing.

When we arrived at the court house on Tuesday, Debra went to the information desk to find out which room the hearing would be held in. I was seated not far away answering a work-related mail message on my phone, but within earshot. I could hear that people were talking and could even pick out Debra’s voice, but I wasn’t paying attention to what was being said. But then I heard another voice that sounded very familiar.

When I looked up, Debra was walking away from the information desk, and the familiar voice was coming from a Sheriff’s deputy at the desk who was looking in her direction: from my perspective, facing more away from me than in exact profile. I suppose you could say that I was seeing his right rear quarter.

The voice, combined with the profile and the uniform, prompted me to walk toward him and say, “Bob?” He turned around and, sure enough, the deputy was indeed Bob: a guy I knew in military school. We’d met at a few reunions since then, and when I saw him eight or ten years ago he was a deputy with the Maricopa County Sheriff.

We chatted for a while and marveled at the unlikely meeting. Bob’s take on things was “It’s a small world,” and I couldn’t disagree.

But on the way back from the hearing I got to wondering, “what are the odds?” More precisely, what was the probability that I would run into that person at that time. At first it seemed highly unlikely, but then I did a little calculation.

The number of Sheriff’s deputies in Maricopa County is something less than 1,000. If you assume that we had to encounter a deputy, then the odds of it being Bob were, at worst, one in 1,000. That’s not so very unlikely. If you narrow it down to deputies who have that court duty (probably fewer than 100), then it’s only one in 100. And if you restrict it to deputies who have duty in that particular court building, then it’s probably better than one in ten.

Given the conditions, that I ran into Bob is not terribly surprising. It wouldn’t seem surprising at all if I lived in Phoenix and knew that his current assignment was at that building. In fact, I probably would have been looking for him when I entered the building and might even have been surprised if I didn’t see him.

Mathematics aside, a seemingly random encounter with an old friend was a pleasant surprise.

I love this. It’s what? I hate that!

When we were kids, we spent a lot of our summer days at home, playing in the pool and jumping on the trampoline. And nearly every day, Mom would make sandwiches for our lunch, which she served outside on the patio picnic table. Those sandwiches were usually lunch meat: bologna, salami, or something similar, along with Miracle Whip and some lettuce, and maybe other stuff. The details are a little foggy now, 45 years later.

I do recall that at some point Mom began making the sandwiches with leaf spinach rather than lettuce. One day, after several days of eating these slightly modified sandwiches, my youngest sister, Melody, commented: “Mom, I really like this new lettuce!” That was a mistake.

You see, of the five of us, the three oldest (myself included) knew that the “new lettuce” was actually spinach. I’m not sure about Marie, who’s a year younger than I, but I know for certain that Melody had no idea that she had been eating spinach for the last few days. And of course my brother and I thought it was our duty to educate our sister. I’m not sure which one of us actually said, “That ‘new lettuce’ is actually leaf spinach.”

Melody looked up at us skeptically (we might have played some tricks on her before), and then looked at Marilyn (oldest sister) for confirmation. Marilyn had already done a face-palm, knowing what the reaction was going to be, and Melody took that as confirmation. She put her sandwich down and said, “Ewwww. I hate spinach!” She wouldn’t finish her sandwich and for weeks after that she’d carefully inspect whatever was put in front of her to ensure that Mom wasn’t trying to sneak something by. If she didn’t recognize it, she wouldn’t eat it.

Understand, Melody was maybe five or six years old at the time. So I guess I can cut her some slack.

Back in the late ’90s, a friend came to visit and Debra and I took her to have sushi. Our friend liked a particular type of sushi roll, and was excited to be having it again. I don’t remember exactly which roll it was, but one of the things she really liked about it was the crunchy texture and the taste of the masago (capelin roe) that was on the outside of the roll. Since she liked sushi and was ecstatic about having that roll, I figured she knew what she was eating. So I said something about fish eggs.

Her response was worse than Melody’s: she put down the piece she was holding, spit out what was in her mouth, and then drank a whole glass of water to get rid of the taste. This was after she’d already eaten two pieces of the roll while enthusiastically telling us how much she liked it. But after she found out that masago is fish eggs, she wouldn’t touch another bite.

Since then, I’ve seen similar reactions from many other people. I call it the, “I love this. It’s what? I hate that!” reaction. I can almost understand it with food, because I’ve been in the position of being told what something was after I ate it, and I felt the internal turmoil of having eaten something that I probably wouldn’t have eaten had I known what it was beforehand. But I can’t at all understand that reaction when applied to other things. Politics, for example.

I’ve actually seen conversations that went something like this:

Person 1: “That’s a really good idea.”

Person 2: “Yeah, when President Obama proposed it, I ….”

Person 1: “Obama proposed it? What a stupid idea!”

And, of course, several years ago I saw similar conversations, but with “Bush” replacing “Obama.”

I would find it funny if it weren’t so common. It seems as though, when it comes to politics, a large fraction of the American public is more interested in who the ideas come from than if the ideas have any merit. We call that “tribalism.” It’s stupid in the extreme.

What’s all the fuss about?

The Supreme Court ruled Friday that states may not discriminate against same-sex couples when issuing marriage licenses. There are many people who are up in arms about this, but their arguments don’t make sense to me. In addition, it seems that persons on both sides of the issue mischaracterize it.

In everything that I’ve read for or against same-sex marriage, the authors seem to be missing the most important point: marriage is actually two related, but definitely different things. Traditionally, marriage is a social and religious institution: a vow made by two people, involving a promise to love, honor, cherish, be faithful to, etc. in accordance with the rules and customs of their particular religion or social group.

But marriage is also a legal contract, with defined rights and responsibilities that vary from state to state.

The two, as I said, are related but definitely separate. For example, legal marriage doesn’t require a commitment to love, honor, cherish, etc. All it takes is two consenting adults to sign a paper in front of witnesses. Done and done. You are now legally married, with all of the rights and responsibilities that come with it. Granted, those rights and responsibilities are rather ill-defined, and they’re not spelled out in the marriage license or any other document that you’re required to sign, but that’s an entirely different matter.

That other type of marriage, the one recognized by churches or other social or cultural groups, typically requires some type of ceremony that takes place in front of the group invoking the blessing of the group or whatever deity they worship. That marriage is recognized by those members of the group, but without a properly signed state-issued marriage license it is not recognized as a binding legal contract.

In short, states can refuse to recognize church marriages that aren’t accompanied by a state-issued marriage license, and churches can refuse to recognize as married two people who have a state-approved marriage that doesn’t conform to the church’s teachings.

The Supreme Court’s decision applies only to the legal contract. All it says is that states can’t discriminate based on gender when it comes to issuing a marriage license and that, based on the Equal Protection Clause of the Fourteenth Amendment, no state may fail to recognize a marriage that was performed legally in another state.

That’s a no-brainer. There’s no Constitutionally justifiable reason for denying same-sex couples the right to enter into a legal contract. And there is long precedent establishing that legally binding contracts executed in one state are recognized as legally binding in all other states.

Note that the ruling doesn’t force religious or other non-government organizations to recognize those unions as meeting the rules for marriages as defined by the group. Nor does it force your church or club or whatever to perform a marriage that doesn’t conform to your group’s rules.

I know that many people reading this will argue that this ruling is the first step on a slippery slope to group marriages, incestuous marriages, unions between an adult and a child, a woman and her cat, etc. In truth, I don’t see a Constitutionally justifiable reason for denying group marriages. But then, I don’t have a particular problem with multiple people entering into a legal contract. As for the others, neither a child nor a cat is a consenting adult, so the argument is idiotic. And in the matter of incestuous unions the appearance of coercion is enough for the state to dispute the claim of “consenting adult.”

Many of those who dispute the ruling bring up the matter of sex, saying that by allowing same-sex marriage, the state is officially condoning homosexuality. That’s not true at all. The legal contract says nothing about sex. It’s not like the marriage license is a license to have sex. People have sex out of wedlock on a regular basis. Some states might still have laws on the books prohibiting sex out of wedlock or certain sexual practices between two consenting adults, but none of them is enforceable. The state is officially indifferent to sex between consenting adults.

Supporters of the Court’s ruling have mischaracterized it. The ruling is not “a victory for love.” It’s a victory for equality, no doubt, but it has nothing at all to do with love. It was purely a Constitutional question, and the ruling addressed just that. The Court’s opinion written by Justice Kennedy, however, is a different thing, and I will address that in a future post.

Members of both groups have acted badly in the wake of Friday’s ruling. Granted, those opposed to the ruling have been by far the more vocal and vitriolic, but supporters have spewed a fair amount of invective themselves, even when confronted by reasoned and polite disagreement.

I will happily accept your comments on this post, including opinions that differ from mine. All I ask is that you keep it civil. Discussion, not argument. Also, if you’re going to discuss the Constitution, I suggest that you read it. In addition, you can save yourself some embarrassment if you read the Supreme Court’s decision. It’s helpful to read the entire thing, but the first five pages contain what really matters.

I am the sole arbiter of what is considered civil, and I will delete any post that I deem to be objectionable. You’re free to say whatever you like, and I’m free to choose what appears in the comments section of my blog. My page, my rules. If you want to act badly, find somewhere else to play.

Fun with the laser engraver

Monday afternoon I took the Safety and Basic Use class for the Trotec laser engraver at TechShop. The class consists mostly of “lecture”: going over safety considerations, the machine’s controls, and how to use the software (CorelDraw or Adobe Illustrator) to prepare files for sending to the engraver. It was mostly a demonstration with very little hands-on time.

So last night I went up to TechShop and reserved the laser engraver for two hours. The class instructor recommended spending some time doing simple things like cutting out basic shapes or engraving text on scrap material, but I figured I’d do a more ambitious project: engrave a picture onto something.

Having never used Illustrator or CorelDraw before, I spent a couple of hours fiddling with them prior to my scheduled time on the laser. I prepared the picture I was going to engrave and also spent some time just poking around in CorelDraw trying to get familiar with all it can do. I think it’s going to be a steep learning curve.

I also collected a few pieces of cardboard and a small piece of hardboard (Masonite) from the scrap pile. I figured I’d practice on scrap before doing this on an expensive piece of material.

The idea was to reproduce this picture on the laser engraver.

I still think that’s the best picture I’ve ever taken of anything. The subject is highly photogenic and I got lucky with the composition.

I first converted the picture to grayscale and ran it through a “pencil sketch” filter to create this:

I also played with the brightness and contrast a bit.

Then I put a piece of cardboard into the laser engraver and printed the picture. It took several passes, including fiddling with the speed and power settings between passes. The result was somewhat curious. This is what it looks like when viewed from directly above.

That’s what I saw while watching it print in the engraver. You can imagine my disappointment. But then I saw it from an angle, which was quite a surprise:

I don’t really understand the physics. I know it has something to do with how the light is reflecting off the material, but I don’t know the specifics of it. It’s kind of a neat effect, though.

Aside from that curious effect, though, the picture still isn’t great. But my time was running out and I was annoyed with the cardboard anyway. I suspect that I had the engraver going too fast. I’ll reduce the speed setting below the recommended value the next time I try to print on cardboard.

Having successfully engraved the cardboard without destroying anything or starting a fire, I put the Masonite in there, input the recommended settings for that material, and printed again. The result was even lighter than on the cardboard. So I reduced the speed from 50 to 20 and re-printed. That produced a very nice picture:

It’s pixelated on purpose; I had set the thing to do 250 DPI. I might try it again sometime at a much higher resolution. But I’m really happy with the way this one turned out.

I’d call it a successful experiment. I learned a bit about how to use the software, and I got a neat picture of Charlie engraved on hardboard. Might have to hang that one on the wall.

Now, for my next project . . .






Tales from the trenches: Fat Bald Famous Person

The story is true. Names have been changed or omitted to protect the guilty.

When I was in the games business I worked on a game that was named after a famous person. Something like Famous Person’s Fantastic Game. I don’t remember the exact circumstance, but one day we on the development team were discussing the user interface with the game publisher’s representative. The topic of the avatar came up and the publisher’s representative said, in effect, “just don’t make Famous Person look bald or fat.”

So of course the first thing our UI guy did the next day was suck the avatar image into Photoshop, remove the hair, and add some extra pounds. We used that version of the program for internal testing for a couple of weeks. We all got a good laugh out of it.

As part of our contract, we had to submit the latest version of the program to the publisher periodically (every three or four weeks, as I recall) for evaluation. The publisher would typically give the build a once-over and then send a copy off to Famous Person’s representatives for … well, for something. Probably to make sure that we got the branding right, and that we didn’t show Famous Person in a bad light. On Friday afternoon, several weeks after our meeting with the publisher’s representative, we dutifully submitted a new version of the program which we assumed the recipient would review the following week.

About an hour after we sent the new build, we got a call from the publisher’s representative. He was in California, two time zones earlier. He had downloaded and installed the new version, and when he started the program the first thing he saw was Fat Bald Famous Person. As you can imagine, he was not at all happy with that. We immediately swapped out the avatar image, made a new build, and sent it off. The publisher’s representative wasn’t happy, but at least he stopped screaming.

We were lucky. Had the publisher’s representative just passed the new build off to Famous Person without first looking at it, the whole deal could have been blown. Famous Person could have canceled his contract with the publisher, who would be well within their rights not only to cancel our development contract but also sue us for causing the loss of the contract with Famous Person. Even if Famous Person didn’t see it, the publisher could have canceled our contract, taken the code, and had somebody else finish the project. Fortunately, our project manager and the publisher’s representative had a good relationship and we just got a stern lecture about doing stupid stuff like that.

Since then I’ve been very careful not to add “funny” messages or images to programs, even in the earliest stages of development. It’s tempting early on to use placeholder text for error messages, for example. Something like, “Uh oh. Something bad happened.” That’s a whole lot easier than trying to come up with a meaningful error message while my brain’s geared towards getting the overall algorithm right. The problem with such an approach is that I have to go back later and add the real error message. Anybody who’s familiar with product development knows that such tasks often fall through the cracks in the last-minute rush to ship. This is especially true when the text to be changed is an error message that might never be seen during testing.

Come to think of it, my primary task on that project involved some significant user interface work. The user interface included many buttons, each of which required an image. When I told the project manager that I’m not a graphic artist, he said, “Just put something there. We’ll have one of the artists create the real buttons.” If you’ve been in the games business, you’re familiar with programmer art, and you probably can imagine what my buttons looked like. Apparently they were “good enough,” though, because the game shipped with my ugly button images. I was appalled. Since then, if somebody tells me to “just put something there,” I make sure that the art I add is so ugly that nobody would even think of shipping the program without first changing it.

Do it right the first time, because it’s quite likely that you won’t have time to or you won’t remember to go back and do it right later.

Evenly distributing items in a list

Imagine that you have many different types of liquids, and you want to mix them in some given ratios. As a simple example, say you have liquid types A, B, and C, and you want to mix them in the ratio of 30 A, 20 B, and 10 C. If you wanted to mix those in a canning jar, you’d probably pour 30 units of A into the jar, then 20 units of B, and then add 10 units of C. If you did that, you could very well find that the mixture is stratified: that the bottom of the jar contains a layer of A, followed by an area where A and B are mixed, a thinner layer of B, an area where B and C are mixed, and then a thin layer of C. If you want things to mix, you have to stir, swirl, shake, or otherwise mix the items.

If you have no way to mix the items after they’ve been added to the container, then you have to change your addition process so that you minimize stratification. One way to do that is to use smaller additions. Rather than pouring all 30 units of A into the jar, followed by all the B and all the C, do it in one-unit increments. So, for example, you’d make one-unit additions in the order [A,B,C,A,B,A] 10 times.

Figuring that out for a mixture of three liquids is pretty easy. Let’s describe that mixture as A(3),B(2),C(1). Now imagine you want to mix seven liquids with these proportions: A(50),B(25),C(12),D(6),E(3),F(2),G(2).

Yeah. It gets messy quick. At first I thought I could create an array of 100 buckets and then, starting with the first item, put an A into every other one. Then put a B into every fourth one. But I ran into a problem real quick because “every fourth” had already been filled with an A. So then I figured I could just put the “every fourth” into positions 5, 9, 13, 17, 21, etc. But then I came to C, and I’d have to put a C into every 8-1/3 items . . .

I stopped at that point because I couldn’t come up with a reasonable set of rules for the general case. And without a clear set of rules, I wasn’t even going to attempt writing code. I went to bed the other night frustrated with my inability to make progress.

I don’t even try to understand how my brain works. At some point just before or after I fell asleep, I envisioned three different streams of liquid. One was pouring out of a nozzle that was delivering three gallons per minute, one delivering two gallons per minute, and the last delivering one gallon per minute. The three streams of liquid were merging!

It was an “Aha!” moment. I actually sat straight up in bed. I know how to merge streams. I just need a way to make those proportions look like streams.

I’ve shown a tabular representation of the three-liquid mixture below. The value in the last column, Frequency, means “one every N”. So the value 6 means “one out of every six.”

Liquid    Count    Frequency
A         3        2
B         2        3
C         1        6

So A would be in positions 2, 4, and 6. B would be in positions 3 and 6. And C would go into position 6. Forget for the moment that position 6 is occupied by three different liquids. Those are the positions we want the liquids to take. Positions 1 and 5 aren’t occupied, but they’ll obviously have to be filled.

If you remember my merging discussions (see article linked above), we built a priority queue (a min-heap) with one item from each stream. Let’s do that with our three items, using a Priority field for ordering. That field is initially set to the Frequency value. So the heap initially contains: [(A,2),(B,3),(C,6)].

Now, remove the first item from the heap and output it. Then add the frequency value to the priority and add the item back to the heap. So after the first item (A) is output, the heap contains: [(B,3),(A,4),(C,6)].

Again, remove the first item, output it, update its priority, and add it back to the heap. The result is [(A,4),(C,6),(B,6)].

If you continue in that fashion, the first six items output are A,B,A,C,B,A.

Given those rules, it was just a few minutes’ work to develop a method that, given an array of counts, returns a sequence that will ensure a reasonable mixture. The OrderItems method below is the result. It uses my DHeap class, which you can download from this blog entry.

    private IEnumerable<int> OrderItems(int[] counts)
    {
        var totalItems = counts.Sum();

        // Create a heap of work items
        var workItems = counts
            .Select((count, i) => new HeapItem(i, count, totalItems));
        var workHeap = new MinDHeap<HeapItem>(2, workItems);

        while (workHeap.Count > 0)
        {
            var item = workHeap.RemoveRoot();
            yield return item.ItemIndex;
            if (--item.Count == 0)
            {
                // all done with this item
                continue;
            }
            // adjust the priority and put it back on the heap
            // (Insert is assumed here; use whatever add method the DHeap class exposes)
            item.Priority += item.Frequency;
            workHeap.Insert(item);
        }
    }

    private class HeapItem : IComparable<HeapItem>
    {
        public int ItemIndex { get; private set; }
        public int Count { get; set; }
        public double Frequency { get; private set; }
        public double Priority { get; set; }

        public HeapItem(int itemIndex, int count, int totalItems)
        {
            ItemIndex = itemIndex;
            Count = count;
            Frequency = (double)totalItems / Count;
            Priority = Frequency;
        }

        public int CompareTo(HeapItem other)
        {
            if (other == null) return 1;
            var rslt = Priority.CompareTo(other.Priority);
            return rslt;
        }
    }
The counts parameter is an array of integers that define the count of each item type to be delivered. In the case of our simple example, the array would contain [3,2,1]. The values in the returned sequence are indexes into that array. The returned sequence for this example would be [0,1,0,2,1,0].

You’ll need to do some translation, then: first to create the counts array, and then to get the actual items from the indexes returned. Here’s an example.

    var mixLiquids = new char[] {'A', 'B', 'C'};
    var mixCounts = new int[] {3, 2, 1};

    foreach (var index in OrderItems(mixCounts))
        Console.Write(mixLiquids[index] + ",");

The output from that will be A,B,A,C,B,A.

The OrderItems method produces a good mixture of items in that it spreads out the liquid additions to minimize stratification, but it might not produce a uniform mixture when some liquids are used much less than others. In my second example, where A, B, and C together make up 87% of the total, liquids D, E, F, and G might not get mixed thoroughly. If I run the code above with my second example, the first additions of F and G don’t occur until the middle of the sequence: around index 45. The result would be that those liquids might not intermix with the first half.
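
If you want to check where the rare liquids land, a quick test like the one below will show you. This is just a sketch: it assumes the OrderItems method above (plus the usual System and System.Linq usings), and the labels and counts mirror the second example.

    var labels = new[] { 'A', 'B', 'C', 'D', 'E', 'F', 'G' };
    var counts = new[] { 50, 25, 12, 6, 3, 2, 2 };
    var sequence = OrderItems(counts).ToList();

    // Print the position of each liquid's first addition in the 100-item sequence.
    for (var i = 0; i < labels.Length; i++)
    {
        Console.WriteLine("{0} first added at position {1}",
            labels[i], sequence.IndexOf(i));
    }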

It might be better to push the low-frequency additions towards the front so that they’re mixed well with all the other additions. To do that, set the initial Priority of all items to 0, and make sure that the comparison function (HeapItem.CompareTo) favors the low frequency item if the priorities are equal. In the HeapItem constructor, change the Priority initialization to:

    Priority = 0;

And replace the CompareTo method with this:

    public int CompareTo(HeapItem other)
    {
        if (other == null) return 1;
        var rslt = Priority.CompareTo(other.Priority);
        // If priorities are the same, then we want the lowest frequency item.
        return rslt != 0
            ? rslt
            : other.Frequency.CompareTo(Frequency);
    }

With those modifications, the lower-frequency additions are done up front, giving us [C,B,A,A,B,A] for the first example. In the second example, F and G would be added first, and then again around index 50.

Or, you might want to put those liquid additions around indexes 25 and 75. To do that, change the Priority initialization to Priority = Frequency/2;.

Although I solved this problem specifically for ordering the additions of liquids in a mixture, the OrderItems method is more generally useful. If you have a bunch of different items that you want to spread out evenly, this will do it for you. It’s a simple solution, and the algorithmic complexity is the same as with merging streams: O(n log2 k), where n is the total number of items and k is the number of different item types.
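
If you don’t have the DHeap class handy, the same idea can be sketched with the PriorityQueue<TElement, TPriority> class that ships with .NET 6 and later. This is my reconstruction of the technique rather than the code from the download link, so treat the details (names, tuple shape) as assumptions:

    // A self-contained version of the even-distribution idea, using the
    // built-in PriorityQueue (.NET 6+). Assumes the usual System,
    // System.Collections.Generic, and System.Linq usings.
    public static IEnumerable<int> OrderItemsBuiltIn(int[] counts)
    {
        var total = counts.Sum();
        var queue = new PriorityQueue<(int Index, int Remaining, double Frequency), double>();

        for (var i = 0; i < counts.Length; i++)
        {
            var frequency = (double)total / counts[i];
            queue.Enqueue((i, counts[i], frequency), frequency);
        }

        while (queue.TryDequeue(out var item, out var priority))
        {
            yield return item.Index;
            if (item.Remaining > 1)
            {
                // Schedule this item's next addition one "frequency" later.
                queue.Enqueue((item.Index, item.Remaining - 1, item.Frequency),
                    priority + item.Frequency);
            }
        }
    }

For the first example, OrderItemsBuiltIn(new[] { 3, 2, 1 }) produces an ordering equivalent to [0,1,0,2,1,0], though ties between equal priorities can come out in a different order than with the DHeap. Like the original, it breaks ties arbitrarily, so the rare-items-first behavior described above would need a secondary sort key.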

Synthetic biology gone wrong

March 4, 2137

Fire Breathing Hamster Destroys Lab

A five-alarm fire at the headquarters of synthetic biology startup Custom Creature Creations caused one hundred million dollars’ damage and destroyed the entire facility. Five firefighters were treated at a local hospital for smoke inhalation after helping to extinguish the blaze. No other injuries were reported.

According to Dr. Stanley McEldridge, president and founder of CCC, the company’s technicians were attempting to create a small fire breathing dragon to demonstrate the company’s capabilities at an upcoming trade show. All appeared to be going well. But when technicians arrived this morning they found a hamster in the synthesis tank where they had expected a dragon. They immediately assumed the worst: that a competitor had broken into the facility overnight, stolen the dragon, and replaced it with a hamster.

After notifying security of the break-in, they removed the hamster from the synthesis tank and placed it in a cage located in another part of the building. About an hour later, one of the lab technicians opened the cage to remove the hamster, and received the shock of his life. The hamster, startled by the technician’s hand reaching in to grab him, backed up and, according to the technician, hissed.

“When the hamster hissed, fire came from its mouth and singed my hand.”

Then the hamster’s whiskers caught on fire, followed quickly by the poor creature’s fur. Screaming and still belching fire, the hamster jumped out of his cage and knocked over a large container of ethanol nearby. The flaming hamster ignited the ethanol, which spread across the room.

Investigators are still trying to determine how the fire spread from there, although they do point out that pure oxygen was piped to the room and that if one or more of the valves was leaking, it could have turned what should have been a minor fire into a conflagration.

The real question is how the lab managed to create a fire breathing hamster. Dr. McEldridge and his staff at Custom Creature Creations would not respond to questions, saying that they are reviewing their procedures and will issue a report when their investigation is complete.

Dr. Alfred Swain, a leading opponent of synthetic biology technology, speculates that the cause was faulty sanitation procedures.

“You have to be very careful with this stuff. Any bit of contamination can lead to unexpected and potentially disastrous results. If one of the technicians working on the project was handling his family’s pet hamster before coming to work, and failed to follow the protocols before entering the clean room, you could very well end up with hamster DNA contaminating the experiment. This is just one example of the things that can happen when we try to create new life forms. I have warned about this kind of thing in the past, and I will continue to lobby for the abolition of all synthetic biology experiments.”

Splitting a log

tree1Back in the summer of 2010, this oak tree developed some kind of disease and we had to have it taken down. It probably would have lived a few more years, but it was starting to rot at the base and it was close enough to the house that if a good stiff wind came along it would end up crashing into the house and ruining the new roof and siding. It’s kind of too bad we had to remove it; the tree provided a lot of shade on the south side of the house.

As an aside, I had a heck of a time finding the pictures of this event. For some reason I thought that we took the tree down in 2009, and I thought I’d blogged about it. But I couldn’t find it in the blog, and I couldn’t find the pictures where I thought they should be. I finally decided to check the year 2010. Hey, at least I got the month right.

I call this the Facebook problem. With Facebook, I’m much more likely to post a picture or two and a few paragraphs. Writing a blog entry is more work and doesn’t have the instant gratification of people pressing “Like” or leaving a quick comment. It’s way too easy to make a quick Facebook post and move on. I had to search sequentially through the history to find the old post. Then I discovered how to search my Activity Log . . .


Anyway, back to the tree. What was left after they topped it is shown on the left: a 12-foot-tall trunk about two feet in diameter and a fork at the top. That ended up in three pieces, the largest being the bottom seven feet. I spent a couple of days cutting up the larger limbs and putting them in the firewood pile, and grinding up the smaller stuff for mulch. The larger limbs and the two smaller (if you call two and a half feet tall and two feet thick “small”) trunk pieces got stacked around a nearby mesquite tree so I could split them after they dried.

Debra and I, with the help of the lawn tractor, rolled the large trunk out of the way under some other trees. The idea was to let it dry for a few years and then carve it into something. I didn’t know what, but I wanted to try my hand at chainsaw carving.

But the log started to crack quite a bit and I didn’t really know how to prevent or even slow the cracking. So I left the trunk there under the other trees, figuring I’d cut it up for firewood (or BBQ wood) one of these days.

I did make an end table from the top trunk piece. That’s another example of the Facebook problem. I’ll have to post about that here another time.

A couple of weeks ago I got the crazy idea of trying to get usable carving or possibly building wood from that trunk. It’d be kind of cool to mill lumber from that tree and build a table or a small hutch or something. And seeing as how my little electric chainsaw would have some serious trouble getting through that trunk, I decided I’d try to split the log and see if I could get any usable lumber out of it. And, because I’m curious, I thought I’d see if I could split it without using power tools.

I started by driving my steel splitting wedge into the end of the log with a little four pound sledge. That worked well enough: a split formed at the top of the log, and there were satisfying crackling sounds coming from the log as the fibers split. But then my wedge was stuck.


I tried making wedges out of oak branches and some scrap 2×4 lumber, but they disintegrated in short order when I tried to drive them into the crack.

A friend who was building a deck a few years ago gave me a bunch of cutoffs: 2×4 and 4×4 pieces that were six to eight inches long. The wood was Ipê: a very hard wood from South America that is commonly used for building decks. I carved a few birds from it, but the rest has been sitting in my shop waiting for me to come up with uses for it. It’s an okay carving wood. It makes excellent splitting wedges, though. A few cuts on the bandsaw and I was back in business.



Then it was a matter of driving a wedge into the log, moving a few inches, driving another wedge, etc. I had enough wedges that by the time I ran out the log had split enough that I could re-use those from the back of the line. I did have to make another trip to the bandsaw for more, though: even the Ipê isn’t indestructible. Between me whacking it with the hammer and the oak resisting splitting, those wedges were only good for two or three uses. I suspect they would have lasted longer if I’d been using a wooden hammer. I might try that if I ever split a log like this again.

When I got to the end of the log it was split most of the way through all along its length, but I didn’t have long enough wedges to complete the job. Debra hurt her finger (nearly crushed it) helping me roll the log over and it was almost dark anyway, so I reluctantly put up my tools (except for the steel wedge that was still stuck in the other end) and called it a night.



And that’s how I left it for a week. This evening I cut eight more wedges and used a steel bar as a lever to roll the log over. It didn’t take but about 15 minutes to finish the job of splitting the log into two pieces.

That’s a very strange perspective. Those two pieces really are the same length. The foreground piece is not as wide as the one in back (the log didn’t split exactly evenly down the middle), but they’re absolutely the same length. The picture makes it look like the foreground piece is longer.

You can also see the remains of the Ipê wedges there on the foreground piece. The rest of them are in splinters.

Both of those pieces have large cracks along which I’ll split off pieces, again by hand. I should end up with about eight 7-foot pieces of wood of varying thickness. I’m hoping that I can get at least one piece that’s four inches square. I know that I will get several pieces that will allow me to cut 2×4 boards, and possibly even some 2×6 pieces. And of course I’ll get lots of stuff that’s one inch or less in thickness.

Once I get the log split into roughly the sized pieces I want, I’ll take them to TechShop and spend some time with the jointer and planer to make lumber. Unless, of course, the log is too far gone. Then I’ll just cut it up and use it for the smoker.

I learned quite a bit in splitting this log. If I had to do another, I could probably do it in half the time. It was pretty interesting going through the learning process, and I have a new appreciation for how people did things before they had the benefit of sawmills that produce nice straight lumber as if by magic. Making your own boards is work.


Making boards

Debra surprised me at Christmas with a one-year membership to TechShop, and a gift certificate for five classes. I’ve been wanting to get involved with TechShop for a couple of years, but there were always other priorities.

Since I got into wood carving, I’ve been slowly making my way into wood working as well, with the oval table being the most recent of my experiments. I’ve long wanted to make cutting boards and similar things, but haven’t really had the tools necessary to do a good job. TechShop, though, has a full wood shop with table saw, large band saw, router table, jointer, planer, thickness sander, etc. I just had to take a couple of classes on Safety and Basic Use (SBU).

Today I took a couple chunks of mesquite–cutoffs from a tree I had to take down last spring–to the shop and rough cut them into lumber. The logs were about eight inches thick, which is two inches larger than what will fit in my band saw. The first thing I did was cut four slabs off one end. I’m planning to turn these into little cheese boards, hopefully keeping the bark edge.


Three of those are 3/4 inch thick. The other is 1/2 inch thick. The dark splotches are actually from moisture. I was surprised at how wet that log was, even after spending the last eight or nine months in my garage. I know that it takes time for wood to dry, but this wood was wet on the inside. Way more moisture than I had expected after that time.

After cutting those slabs, the remaining log is about 14 inches long. The other log, shown here before cutting, was right at 18 inches.


I didn’t take any progress pictures showing how I set up to cut boards from the logs. Briefly:

For cutting the cheese boards, I screwed a scrap piece of 2×6 lumber to the log so that there was a flat and stable base for it to rest on. I took a thin slice to square up the end, and then set the band saw fence to 3/4 inch and cut the three cheese boards. I had planned to cut four that thick, but I goofed when I screwed the 2×6 onto the log; I didn’t leave enough log hanging out. So I had to settle for 1/2 inch on the last one. I could have just sawed through the 2×6 or taken the time to adjust the base. I decided to see if 1/2 inch will be thick enough.

For cutting the boards, I set the scrap 2×6 firmly on the table beside the log, and carefully screwed them together. Doing that provides a steady base so that the log can’t roll to the side when I’m pushing it through the saw. I made one cut of about 3/4 inch to get a good flat side on the log. I then took it over to the jointer and made that perfectly flat.

The picture linked below is one I took a few years back, showing how the board attached to the log works.

Then back to the band saw with the flat base on the table, I took 3/4 inch off one of the sides, took the log back to the jointer, and squared that side up so that I had two perfectly flat sides that were at an angle of exactly 90 degrees with each other.

Back to the band saw, I set the fence one inch away from the blade and with one flat side on the table and the other flat side on the fence, I cut the boards.

I’ve cut lumber on my band saw at home without using a jointer to square up the sides. It works okay, but the boards don’t come out near as close to square as they did today.

So now I have a bunch of rough cut mesquite boards, all one inch thick and with varying widths and lengths. I’ve stacked them in my garage, separated by some scrap wood so that air can circulate, and will let them dry for six or eight months. I figure next fall I’ll be able to make some cutting boards. Although I might get impatient and cut up some of the other wood I have here that’s already dry. Unfortunately, I don’t think I have enough dry mesquite to make a cutting board. I have plenty of other types, though.

The cheese boards won’t take nearly as long to dry. I’ve put them aside, as well, but I expect them to be dry enough for working in less than two months. Possibly even sooner than that. Wood loses its moisture very quickly through the ends, so those 3/4 inch pieces should dry fast. I’ve also considered experimenting with using the oven as a kiln to speed the drying process. I might sacrifice one of the slabs and one of the boards to an experiment . . .



I made a few thinner cuts, as experiments. One of the pieces is a little less than 1/16 inch thick. I’m sure that with a little practice I could reliably cut 1/16 inch veneer, and quite possibly 1/32 inch. That opens up some interesting possibilities.


All told, I had a great time playing in the wood shop today. Now I just have to be patient until the wood dries so I can use it.


Building an oval table

After having so much fun working with the folks at Sam Bass Community Theatre, I volunteered to help out with their next show: a production of James Lapine’s Table Settings. Rather than acting this time, I’ll be running the lights and sound, and I’m also helping out with set construction.

The primary set piece is a table, and the director wanted something specific: a 4 foot by 8 foot oval table covered with a tablecloth and strong enough that a 200 pound man can stand on it. Feeling adventurous, I volunteered to build the table.

Understand, I’d never really built anything before. Oh, sure, I’ve assembled Ikea furniture, knocked together a few rickety work benches and some barely functional garage shelves, and even trimmed a door or three, but that’s a far cry from creating a large table starting with a plan and a bunch of lumber. But what the heck: you learn by doing, right?

It was cold (35 degrees) this weekend and there’s no heat in my garage, so I elected to construct the table in our master bedroom, which is currently under renovation. That is, it’s torn apart and we haven’t started putting it back together. That’s my next project. I picked up the required materials at Home Depot on Friday evening and Debra helped me carry it all through the house to the bedroom. The only thing I really needed help with was a 4×8 sheet of 3/4 inch plywood. The rest of the lumber was a bunch of 2×4’s and one wooden dowel.

I chose to get plain old plywood rather than cabinet grade. No use spending the extra money when it’s going to be covered with a tablecloth. And the tablecloth (somebody else is making that) will reach all the way to the ground, so I didn’t have to spend any effort making the legs look good.

There’s nothing particularly difficult about cutting an oval. I remembered learning how to draw one in geometry class nearly 40 years ago, but I didn’t remember the specifics. YouTube to the rescue. There are about a zillion videos showing how to draw an ellipse using nothing more than a few pins or nails, some string, and pencil. Here’s one that I found to be particularly clear and easy to follow.
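
If you’d rather work from numbers than a video, the geometry for a full 96 x 48 inch ellipse works out like this (this is standard ellipse math, a back-of-the-envelope check rather than anything I measured at the time): the half-axes are a = 48 and b = 24 inches, so the two pins (the foci) go sqrt(48² − 24²), or about 41.6 inches, either side of center along the long axis, and the string tied between them needs to be 2a = 96 inches, because every point on the ellipse sits a combined 96 inches from the two pins.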

It took a couple of tries to get it right because the knot in my string kept slipping. But I managed to get a reasonably accurate ellipse on the plywood. Then it was time to break out the jigsaw.

The bite there on the left corner was a test cut. I’ve used a jigsaw maybe twice in my life before this project, and I wanted to make sure I could follow a line. You can see that I goofed on entry to the ellipse line (overshot it). I knew that I wouldn’t cut it perfect, and I had already planned to take a belt sander to the edge once I was done with the rough cutout. I just had to do a little more smoothing than I’d originally planned.

Making a smooth cut with the jigsaw requires a fine blade, and patience. Take it slow. Don’t force the saw through the wood. Rather, just guide the saw. Let the blade do the cutting. Also, don’t try to go all the way around in a single cut. Take off smaller segments. Otherwise you risk having the plywood break off and ruin your pretty shape.

Even taking a few breaks for pictures and to stretch out my back (leaning over to guide that saw is uncomfortable), it took less than 30 minutes to complete the cutout.


The completed cutout is 91 inches long and 46 inches wide. Not bad starting from 96×48, although I can’t give a good reason why I didn’t get 94 inches. Oh, well. It’s close enough.

With the top cut out, it was time for the hard part: constructing the base. I chose to modify the base for this simple table because … well, it’s simple. The base is functional, sturdy, and looks easy enough to build with simple tools.

The only modification I made was to the dimensions. My base is 49 inches long and 32 inches wide. That leaves almost two feet of table hanging off each end, but it’s still plenty sturdy. I wouldn’t recommend trying to sit or stand on one of the ends, though. I was a little worried that the center span would be too large and would sag under the weight of 200 pounds standing on it, but The Sagulator says that it’s acceptable.

I won’t detail construction of the base. I followed the directions in the linked article and everything worked out just fine. It just took a long time because I was checking everything multiple times to be sure I wasn’t making a mistake. When I got it all put together, I was a little surprised that the base was level with no wobbles. I guess all that double- and triple-checking paid off.


Attaching the top turned out to be a chore. For some reason I couldn’t get the screws to hold in one corner of the plywood. I futzed with the thing for a while and finally got it to work. I still don’t know what the problem was. I suspect that there was a soft spot in the plywood that kept the screws from biting. Moving the screws a few inches solved the problem. And, as you can see, the table passed the fat guy standing test. I’m smart enough not to try the fat guy bouncing up and down test.


The last thing I did was sand the top to remove any splinters and the manufacturer’s printing (including that silly notice telling me that plywood contains chemicals that the State of California has determined to cause cancer; is there any product in existence that doesn’t have one of those warnings on it?), and run a router around the edge. I’ve always disliked how a tablecloth looks hanging over a hard edge. A nice rounded edge makes the cloth drape a lot nicer. Here you can see the difference between the straight edge and the rounded edge.

The completed table should work well for the play, and if they don’t want to keep it afterwards I’ll probably take the base back and attach a rectangular top to use as a workbench. Not sure what I’ll do with the elliptical top.

This was a fun project. Better, I was able to complete it with tools I already had. As the author of the Simple Table article points out, this project can be completed with a minimum of tools. The only tools I added were the jigsaw, belt sander, and router, and those were for constructing the top. I did use my compound miter saw to cut the legs because my electric circular saw grew legs a few years ago and the battery powered saw couldn’t make it through more than two cuts before crapping out on me. I even had to cut one of the rabbets with a chisel because the battery died and I didn’t want to wait for it to recharge.

If you ever thought of making your own work table, you should give that Simple Table a try. It’s not hard to build, and it’s not like you’d be out a huge investment if you screw it up. 2×4’s are three or four dollars each. For me, it was a great first project and now I’m looking forward to building other things.



Short term thinking

The price of gas has dropped about $1.50 per gallon in the past couple of months. The other day I paid $1.85 per gallon for regular unleaded. Inflation adjusted, that would be like paying $1.35 in the year 2000. Not an historic low (I paid 95 cents per gallon back in November 2001), but it’s down almost 50% since June.

With that reduction in gas prices, people are already thinking about how to spend their savings. Car dealerships are reporting a large jump in sales recently, and buyers are citing the low price of gas as one reason for their purchases. And it’s not the economy cars that are selling. No, people are buying big ol’ gas guzzlers, conveniently ignoring that the price of gas is volatile and quite likely to climb back to $4.00 per gallon as quickly as it dropped. It might be a year or more before the price goes up, but it will go up. I will have no sympathy for those who, two years from now, complain that they can’t afford the payments on their SUVs or to buy gas to drive the silly things.

Not that I expect people to do anything other than what they’re doing. It seems most people will spend just a little more than they can afford on a car, regardless of what they really need in a car. Why spend only $15,000 on basic transportation when you can spend $30,000 on a cool new whiz bang monster SUV with all the bells and whistles? After all, the finance company wouldn’t let me borrow more than I can afford. Right?

Politicians, too, aren’t afraid to say and do stupid things in response to this latest drop in the price of gas. Democrats and Republicans alike are making noises about instituting a “carbon tax” on gasoline. To the tune of 40 cents per gallon! The argument is that gasoline is underpriced, with the price not reflecting the full cost of the product. That is, the damage done to the environment by burning the fuel. One is supposed to believe, I guess, that if such a tax were instituted, the revenue would go towards some method of combating climate change.

The truth is somewhat different. Republicans are looking at an increased gas tax as–wait for it–a means of reducing income taxes. This is one of the best examples of doublethink that I’ve seen in a long time. Conservatives who have historically opposed any new tax or any reduction in tax deductions are seriously saying that taxing consumption rather than income is a solid “conservative” principle that they’ve been advocating forever.

Now I’m not saying that Democrats would necessarily use that additional revenue to combat climate change. No, they’d be more likely to put forth bills that fund all manner of additional social programs, few of which have any chance of doing anything but making people think Congress is Doing Something About The Problem, and most of which are no different from programs that have failed in the past.

It’s all a bunch of short-term thinking. Knowing how Congress works, they would project revenue based on consumption of gas at the current price, without taking into account that consumption decreases as price increases. Adding 40 cents per gallon will immediately reduce consumption, and the inevitable price increase in the next few years will reduce it even more. Any proposed legislation to squander the ill-gotten gains would be dependent on the projected tax revenue, and when that revenue decreases those programs would be under-funded.

What Congress should do is … nothing: let us consumers enjoy this temporary respite from the high price of gas. Let suppliers sort things out, and when demand increases or the Saudis decide they need more money, the price will start going up again. But Congress is a money junkie with all the self control of a drug addict. The primary difference being that we prosecute drug addicts but we condone and even encourage Congress’s addiction even though they do way more harm than good.

Environmental groups should concentrate on encouraging more sustainable energy supplies, and ignore the temporary increase in fossil fuel usage. The sooner we burn all of the readily available fossil fuels, the sooner their alternative energy sources will be in demand. If the environmentalists’ projections are right in regards to climate change, a few years’ increased consumption isn’t going to make much of a difference anyway. They might as well spend their limited resources (time and money) on developing alternatives rather than on fighting a losing battle against consumption.

