You must have this whether or not you need it

We converted all of our projects to Visual Studio 2010 a couple of weeks ago as a first step towards upgrading the production machines to .NET 4.0.  I was very pleased at how painlessly the conversion from Visual Studio 2008 went.  There were a few glitches, but in just a day we had all 100+ projects converted and nothing serious went wrong.

Since then, I’ve been testing the code with .NET 4.0, making sure that programs still talk to each other, that we can still read our custom data format, and that the programs actually work as expected.  Call me paranoid, but when the business depends on the software, I try to make absolutely sure that everything’s working before I throw the switch on a major platform change.

All the tests have been successful.  I’ve not found anything that breaks under .NET 4.0.  I hadn’t expected anything, but I had to test anyway.  So this morning my job was to merge my project changes into our source repository and ensure that the automated build works.

The merge was no problem.  The automated build failed.  It failed because I didn’t check in some files.  Which files?  I’m so glad you asked.

A convention in .NET is that you can store application-specific configuration information in what’s called an application configuration file that you ship along with the executable.  The file, if it exists, is usually named <application name>.config, so if your executable program is hello.exe, then the configuration file would be called hello.exe.config.  As a convention, it works quite well.

The major point here is that the application configuration file is optional.  If you don’t want to store any information there, then you don’t need the file.
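To illustrate, here’s a minimal sketch of how a program typically reads a value from that file (the “Greeting” key is invented for the example, and the code requires a reference to System.Configuration.dll).  If the file or the key doesn’t exist, the program simply falls back to a default:

using System;
using System.Configuration;

class ConfigDemo
{
    static void Main()
    {
        // AppSettings["Greeting"] returns null if the configuration
        // file or the key is missing -- the file really is optional.
        string greeting =
            ConfigurationManager.AppSettings["Greeting"] ?? "Hello, world!";
        Console.WriteLine(greeting);
    }
}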

Visual Studio lets you create a file called App.config to hold the configuration information.  When you build the program, Visual Studio copies App.config to the build output directory and renames it.  Continuing the above example, App.config would become hello.exe.config.

So far so good.  All that still works as expected in Visual Studio 2010.  Until, that is, you change the program’s target framework in the build options from .NET 3.5 to .NET 4.0.  Then, Visual Studio becomes helpful.  It automatically creates an App.config file for you, places it in your source directory, and updates your project settings to reflect the new file.  So helpful.

And so broken!

It’s broken because when I checked in the modified projects (which I thought I had merely changed to target .NET 4.0), each project also contained a new, and unknown to me, reference to an App.config file that I did not check in.  So when my automated build script checked out the code and tried to build the solution, every executable project failed with a missing file.
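If memory serves, the reference Visual Studio adds to the project file looks something like this (a sketch; the exact markup may vary by project type):

<ItemGroup>
  <None Include="App.config" />
</ItemGroup>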

That Visual Studio added a file and didn’t tell me is merely annoying.  What really chaps my hide, though, is that the file it added is extraneous.  It contains no required information.  Here, in its entirety, is the file that Visual Studio so helpfully added:

<?xml version="1.0"?>
<configuration>
  <startup>
    <supportedRuntime version="v4.0" sku=".NETFramework,Version=v4.0"/>
  </startup>
</configuration>

My options at this point are: 1) Go through the compiler errors and check in every App.config file that was automatically created; 2) Edit the projects to remove the file references.

I’d like to do #2, but if I do I’ll just run into this same problem again the next time I upgrade Visual Studio.

The final result is that it took me an extra hour to get my build to work, and now every executable program that I deploy has an extraneous file tagging along as useless baggage.  It’s extra work to eliminate those files in the deployment process, and in doing so I might eliminate one that I really need.  In the past, only programs that really needed the configuration files had them, and I knew that the existence of the file meant that there were settings I could modify.  Now that every program has a configuration file, I’ll be forever checking to see if there are settings that change the application’s behavior.

So, to sum up: Visual Studio adds an unnecessary file, modifies the project, doesn’t notify me of the added file, fails to build if the file isn’t checked in, forces me to deploy that unnecessary file or somehow modify my build process to delete it, and the resulting file will lead to uncertainty on the part of the user.  Wow.  Who would have thought that a single simple feature could cause so much unnecessary work and annoyance?

Bikes, shoes, and armadillos

Armadillos are an occasional feature on the local roadways. Dead armadillos, unfortunately. In 15 years of living in the Austin area I’ve seen three live armadillos. The rest have been dead–splattered on the highway like a ‘possum on a half shell. But I’ve seen more than usual on my bike rides the last few weeks. I’m thinking that the rain we got from Hermine (16 inches at our place) drove the creatures out of their normal hiding places and they’ve been on the roads in unusual numbers.

But I can’t explain the number of shoes I’ve seen on the road lately. I’ve occasionally seen a shoe on the side of the road in the past, but last week I saw at least one shoe on every bike ride. I’m not talking ratty old beat-up shoes, either, but relatively new, expensive-looking running shoes. The majority were on the 1.25 mile stretch between the office and the data center. I ride that nearly every day to retrieve the backup drive. Most days, I saw a different shoe lying in the road. My only explanation is that the road is used by high school students, and somebody (or several somebodies) thought it’d be funny to throw his friend’s shoe out the window.

The other road on which I encountered shoes is a major surface street, and I suppose also used by high school students on their way to and from school. Whatever the case, if you figure $100 for a good pair of shoes, last week I saw $400 or $500 worth of shoes on the ground. Always just one shoe at a time, though, and I don’t think any two together would have made a matched pair.

Yeah, one gets an entirely new perspective of the road when riding on the shoulder at 15 or 20 MPH.

I picked up an old rusty metal file from the shoulder of the road the other day. I don’t particularly need a rusty file, but I’ve been told that they contain very good steel. A number of wood carvers I know make custom knives from discarded files. I thought I’d give it a try.

I’ve signed up to ride the Outlaw Trail 100 on October 9. Debra and I did this ride in 2005–her first century. I’ve continued my training since the Hotter ‘N Hell ride, and expect to do a bit better here in Round Rock.

Unwise conventional wisdom, Part 1: Locks are slow

In two different discussions recently I had somebody tell me, “locks are slow.” One person’s comment was “Locks should be avoided whenever possible. They’re slow.” This is a bit of conventional wisdom that’s been around for decades and seems to be getting more prevalent now that more programmers are finding themselves working with multithreaded programs.

And, like all too much conventional wisdom, it’s just flat wrong.

A lock is a cooperative synchronization primitive used in computer programs to provide mutually exclusive access to a shared resource. Yeah, that’s a mouthful. Let me put it into simpler terms.

When I was in Boy Scouts, we’d sit around the campfire at night and tell stories. The person who held the “speaking stick” was the only one allowed to talk. My holding the stick didn’t actually prevent anybody else from talking. Rather, we all agreed on the convention: he who holds the stick talks. Everybody else listens and waits his turn. The stick was a cooperative mutual exclusion device.

Programming locks work the same way. All threads of execution agree that they will abide by the rules and wait their turn to get the stick before accessing whatever resource is being protected. This is very important because many things in the computer do not react well if two different processes try to access them at the same time. Let me give you another real-world example.

Imagine that you and a co-worker both need to update an Excel spreadsheet that’s stored in a shared location. You take a copy of the file and start making your changes. Your co-worker does the same thing. You complete your changes and copy the new file over to the shared location. Ten minutes later, your co-worker copies his changed version of the document, overwriting the changes that you just made. The same kind of thing can happen inside a computer program, but it happens billions of times faster.
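Here’s a minimal sketch of the same lost-update problem in code: two threads each increment a shared counter a million times with no lock. Because ++counter is really three steps (read, add, write), the threads regularly overwrite each other’s updates and the final count comes up short:

using System;
using System.Threading;

class LostUpdateDemo
{
    static int counter = 0;

    static void Increment()
    {
        for (int i = 0; i < 1000000; ++i)
        {
            ++counter;    // read-modify-write: not atomic
        }
    }

    static void Main()
    {
        Thread t1 = new Thread(Increment);
        Thread t2 = new Thread(Increment);
        t1.Start();
        t2.Start();
        t1.Join();
        t2.Join();

        // Almost always prints something less than 2,000,000.
        Console.WriteLine("Counter = {0}", counter);
    }
}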

And so we use locks and other synchronization primitives to prevent such things from happening. Locks are common because they’re very simple to understand, easy to use, and quite effective. There are potential problems with locks, but any tool can be misused.

So let’s get back to the conventional wisdom. Are locks really slow? Rather than guess, let’s see how long it takes to acquire a lock. The first bit of code executes a loop, incrementing a variable one million times. The second code snippet does the same thing, but acquires a lock each time before incrementing the variable and then releases the lock afterwards. The code samples are in C#.

// Without a lock
int trash = 0;
for (int i = 0; i < 1000000; ++i)
{
    ++trash;
}

// Using a lock
int trash = 0;
object lockobj = new object();
for (int i = 0; i < 1000000; ++i)
{
    lock (lockobj)
    {
        ++trash;
    }
}
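If you want to reproduce the measurement, a timing harness along these lines will do the job (a sketch using System.Diagnostics.Stopwatch; the numbers will vary from machine to machine):

using System;
using System.Diagnostics;

class LockTiming
{
    static void Main()
    {
        const int Iterations = 1000000;
        int trash = 0;
        object lockobj = new object();

        // Time the bare increments.
        Stopwatch sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; ++i)
        {
            ++trash;
        }
        sw.Stop();
        Console.WriteLine("No lock:   {0} ms", sw.ElapsedMilliseconds);

        // Time the increments with an uncontended lock around each one.
        sw = Stopwatch.StartNew();
        for (int i = 0; i < Iterations; ++i)
        {
            lock (lockobj)
            {
                ++trash;
            }
        }
        sw.Stop();
        Console.WriteLine("With lock: {0} ms", sw.ElapsedMilliseconds);

        Console.WriteLine(trash);    // keep the JIT from discarding the loops
    }
}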

Timing those two bits of code reveals that the second takes almost one-tenth of a second longer than the first. So it takes approximately 0.10 seconds to acquire and release a lock one million times. That’s about 10 million locks per second, or 0.10 microseconds (100 nanoseconds) per lock. (It’s actually closer to 80 nanoseconds, but 100 is a nice round number.) I know, I can hear the performance-sensitive programmers screaming already, “Oh my bleeding eyeballs! 100 nanoseconds! That’s like 400 clock cycles! That’s an eternity to my super fast machine!”

And he’s right. To a computer running at 4 GHz, 100 nanoseconds (those 400 clock cycles) is a pretty long time to spend doing nothing. But locks don’t execute in isolation. They’re there to protect a resource, and accessing or updating that resource takes time–usually a whole lot longer than it takes to acquire the lock. For example, let’s say we have this method that takes, on average, about one microsecond (that’s 1,000 nanoseconds) to execute when there is no lock.

void UpdateMyDataStructure()
{
    // do cool stuff here that takes a microsecond
}

Then we add a lock so only one thread at a time can be doing that cool thing.

static object lockobj = new object();
void UpdateMyDataStructure()
{
    lock (lockobj)
    {
        // do cool stuff here that takes a microsecond
    }
}

It still takes 100 nanoseconds to acquire the lock when it’s not contended (that is, when no other thread already has the lock), but doing so only adds 10 percent to the total runtime of the method. I know, I know, more bleeding eyeballs. Programmers lose sleep over 10 percent losses in performance. Dedicated optimizers will go to great lengths to save even five percent, and here I’m talking about 10 percent like it’s nothing. Let’s talk about that a bit because there are several issues to consider.

There’s no doubt that the lock is adding 10 percent to the method’s total runtime. But does it really matter? If your program calls that method once per second, then in a million seconds (about 12 days) acquiring the lock will have cost an extra tenth of a second of run time. The 10 percent performance penalty in that one method is irrelevant compared to the total runtime of the program.

We also can’t forget that there are multiple threads calling that same method. Sometimes one thread will already be holding the lock when another thread tries to acquire it. In that case, the thread coming in will have to wait up to one microsecond before it can acquire the lock, meaning that executing the method can take a whopping 2,100 nanoseconds! (1,000 nanoseconds for the first thread to complete, 100 nanoseconds to clear the lock, and another 1,000 nanoseconds to do its own cool stuff.) By now my friend’s eyeballs have exploded.

And things only get worse as you add more threads and call the method more often. But again, does it matter? At an average of 1,100 nanoseconds per call, that method can be called more than 900,000 times per second. Without the lock, you can call it a million times per second. It seems to me that if your program spends nearly all of its time in that one method, you have a much bigger problem than the lock: your entire program is gated by the performance of that one method. That’s true whether or not you have multiple threads accessing it.

And don’t forget the most important point: the lock or something like it is required. You’re protecting a resource that you’ve determined will not support simultaneous access by multiple threads. If you remove the lock, the program breaks.

The conventional wisdom that locks are slow is based on two things. From the optimizer’s point of view, a lock is slow because it adds to the amount of time required to execute a particular bit of code, and doesn’t provide any immediately recognizable benefit. It’s just extra machine cycles. The optimizer doesn’t care that the time required for the lock is a minuscule portion of the total program runtime.

The other reason locks are considered slow is because an application can become “gated” by a lock. That is, all of the threads in the application spend an inordinate amount of time doing nothing while waiting to acquire a lock on a critical resource. Therefore, concludes the programmer who’s profiling the application, the lock must be slow. It doesn’t occur to him that the lock isn’t the problem. Any other means of synchronizing access to the critical resource would exhibit similar problems. The problem is designing the program to require mutually exclusive access to a shared resource in a performance-critical part of the code.

That may seem like a fine distinction to some, but there is an important difference. It’s one thing to complain if I install an 800-pound steel door on your linen closet because I feel like it, and something else entirely to complain when I install it because you told me to. If you design something that has to use a lock, don’t get upset at the lock when it turns out to be inappropriate for the task at hand.

There are many alternatives to locks. There have been some important advances recently in lock-free concurrent data structures like linked lists, queues, stacks, and dictionaries. The concurrent versions aren’t as fast as their exclusive access counterparts, but they’re faster to access than if you were to protect the non-concurrent version with some sort of lock. The other primary alternative is to redesign the program to remove the requirement for exclusive access. How that’s done is highly application dependent, but often requires a combination of buffering and favoring infrequent long delays over frequent short delays.
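As a concrete example (a sketch using the System.Collections.Concurrent namespace that’s new in .NET 4.0), a ConcurrentQueue<T> lets multiple threads add and remove items safely without the caller taking an explicit lock:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class ConcurrentQueueDemo
{
    static void Main()
    {
        var queue = new ConcurrentQueue<int>();

        // Two producers enqueue concurrently -- no lock required.
        Task p1 = Task.Factory.StartNew(() =>
        {
            for (int i = 0; i < 1000; ++i) queue.Enqueue(i);
        });
        Task p2 = Task.Factory.StartNew(() =>
        {
            for (int i = 1000; i < 2000; ++i) queue.Enqueue(i);
        });
        Task.WaitAll(p1, p2);

        // TryDequeue returns false when the queue is empty, so there's
        // no racy "check Count, then Dequeue" sequence to get wrong.
        int item;
        int count = 0;
        while (queue.TryDequeue(out item))
        {
            ++count;
        }
        Console.WriteLine("Dequeued {0} items", count);    // 2000
    }
}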

To recap: locks are not slow. Used correctly, a lock provides safe, effective, and efficient mutually exclusive access to a shared resource. When it appears that a lock is slow, it’s because you have gated your application on access to that shared resource. If that happens, the problem is not with the lock, but with the design of the application or of the shared resource.

Spam problem found, but solution questionable

A few months ago I noticed a marked increase in the amount of spam that I was receiving.  At the time it was a minor inconvenience and I just dealt with the problem the old-fashioned way:  I deleted the offending messages.  But a week or two ago Debra started noticing a large increase.  And then we were gone over the weekend, and when I came back I had to trash over 200 messages.  Time to do something about the problem.

I get my email through my ISP, who has SpamAssassin installed.  I checked my settings again, just to be sure I had it configured correctly, and then sent a message to my ISP’s support through their exceedingly user-unfriendly help desk software.  After a short exchange of messages I got their answer:  1) lower the spam threshold in my SpamAssassin configuration to 3; 2) train SpamAssassin.

Fine.  Except.

1) According to the SpamAssassin configuration information, a setting of 5.0 is “fairly aggressive”, and should be sufficient for one or just a handful of users.  The instructions caution that a larger installation would want to increase that threshold.  They don’t say what lower numbers would do, but several of the obviously spam messages I’ve examined score only a little over 2.0–below even the proposed threshold of 3–so lowering the setting that far seems likely to produce false positives without stopping all of the junk.

2) Their method of training SpamAssassin involves me installing a Perl script (written by a user who has no official connection to the ISP and that is not officially supported), forwarding good messages to a private email address (that I control), and having the Perl script examine those messages so that it can update the tables that SpamAssassin uses.

That’s ridiculous!

First, there’s no explanation why my spam count went from almost zero to 50 or more per day almost overnight.  Second, they expect me to have the knowledge, time, and inclination to install and run that script.  Oh, and if I want to make sure that Debra’s mail is filtered correctly, I should have her forward her good emails to that private email address, too.  “I promise I won’t look at them.”  I wouldn’t, and it’s unlikely that there’s anything she’d want to hide from me anyway, but I can imagine that others who share my predicament would have users who are reluctant to forward their emails.

Honestly, it’s among the most ridiculous things I’ve ever heard.  Why don’t they have a reasonable Web client that has a “mark as spam” button?  Why, after 10 years of dealing with spam, is there no informal standard for notifying your ISP that a message you received in your email client is spam?  Why should I even have to think about spam anymore?  Shouldn’t the ISP’s frontline filters catch the obvious garbage that’s been clogging my mailbox?

I think I need a new ISP.  Or at least a better way to get my mail:  something that will filter the spam for me after downloading from my ISP.  But it has to be Web based.  I like using a Web mail client, because I regularly check my email from multiple locations.  Any suggestions on Web-based email services that can do this for me?

Hotter ‘N Hell Hundred – The Ride

The ride was scheduled to start at about 7:15, and this event was serious about being on time. Even at 6:30, there was a huge number of people heading towards the start. We took up at least six city blocks on a four-lane (maybe five-lane) road. The number of people there was just astounding. I’m disappointed that none of my crowd pictures turned out well. Standing there on the edge of the road, I could easily see 10,000 cyclists lined up to start the ride.

My friend and fellow Marine Military Academy alumnus Frank Colunga had come up for the ride from College Station. We started the ride together, but separated early on. I didn’t take any other pictures during the ride, but Marathon Foto was there, and I got a bunch of shots from them.  You can view them on my Facebook photo album.

After a few announcements and the traditional singing of the National Anthem–it was amazing how the chatter stopped with the first notes of the Anthem–a flight of four fighters (I think they were F-15s, but I could be mistaken) from Sheppard Air Force Base did a low fly-by, and as they passed the cannon fired to signal the start of the ride.

It’s hard to explain to somebody who hasn’t experienced it just what the start of a large bike ride is like. We’re all packed together far too tightly to just get on our bikes and start riding. They line us up by expected finishing time, with the faster riders in the front. Those of us further back end up walking a hundred yards or more, slowly pushing our bikes along until we get to the start line, where the road widens and the space in front of us opens up enough that we can get on the saddle and start pedaling. Then, we slowly increase speed, always being mindful of the people in front and the speed demons coming up to pass from behind.

The key here is to keep your eyes on what’s happening ahead of you, keep riding in as straight a line as possible, and always look carefully behind before making any lateral changes. You have to pay attention to hand signals from riders you’re approaching, keep your ears open for shouts of “hole” or “bottle” or “glass” from riders warning of hazards ahead of you, and “on your left” from people coming up from behind. I’m surprised at how many riders were wearing ear buds and listening to music at this point. I can maybe understand listening to music once you get out on the road and away from the crowds, but at the start of a big ride like this, you really shouldn’t have anything interfering with your awareness of the situation.

I’ve done quite a few of these organized rides, and I’m fairly accustomed to the things that happen at the beginning. But it seemed to me that there were a whole lot more lost water bottles–especially at the beginning of the ride–than I’d ever seen previously. The first five miles was a veritable obstacle course of bottles rolling around on the road. It’s a good thing we had a four-lane road all to ourselves for the first 10 miles or so.

I felt good. The temperature at the start was about 70 degrees, and the forecast was for a high in the low 90s and a south wind at about 7 MPH. The route is roughly a rectangle that’s approximately 35 miles east-west and 15 miles north-south, with the start point about 10 miles west of the southeast corner. The ride proceeds clockwise, so we headed west, then north, a very long stretch across the “top” as we head back east, and finally a meandering south and southwest grind into the wind back to Wichita Falls.

Frank seemed to like being left alone to zone out, so I left him behind shortly after the 10 mile mark and set to the business of riding. I couldn’t get over the number of people out there. At one point–near the 15 mile mark–I topped a little rise (the flatlanders called it a hill) and could see two or three miles in front of me. As far as I could see, the road (two lanes by this time, but with decent shoulders) was packed with cyclists. I’d never seen that many people still together 15 miles into a ride. It thinned out over time, of course, but in the entire 100 miles there never was a point that I couldn’t see several dozen riders sharing the road with me.

I do most of my training alone, so I’m still a bit uncomfortable joining a paceline, in part because I’m afraid I’ll do something wrong and cause a wreck, and in part because I have a difficult time trusting that the people in front or behind me won’t cause me to wreck. On the plus side, cooperating in a paceline can save you a lot of energy and increase your overall speed because you get the benefit of drafting off the others and–especially in a big group–only have to hammer it out up front very rarely. I joined a few pacelines along the way, pitching in to help when it was my turn, and had a few impromptu lines form behind me from time to time when I’d pass a group of riders. I stayed with one group for a good 10 miles or so until they pulled off at a rest stop while I kept going. I just couldn’t find a group that was maintaining a speed that I found comfortable.

Riding alone has its benefits. I can share short conversations with people and then fall back or speed up, wishing them a good ride and going back to enjoying the sights and the antics of the riders and spectators. And there were lots of spectators. Every little town we rode through had groups of people sitting along the road cheering us on. And, of course, there’s the mild amusement of rural Texas. The little town of Electra, Texas, for example, boasts “The Pumpjack Capital of Texas!” They even have a Pumpjack Festival. There were a few others I laughed at as I passed, but I don’t recall them now. And although I had my camera in my jersey pocket, the road was too crowded at first and I was too exhausted later to even think about fiddling with a camera while I was riding.

The town of Electra is at the 30 mile mark. I had originally planned to stop there for water, but with the cool weather and the light tailwind for the previous 10 miles I hadn’t had to drink as much as I expected. I elected to push on to the 40 mile mark before stopping. I was moving along at a good clip, too. At the 30 mile point, I was averaging almost 21 MPH–much faster than I expected, even on this flat course.

Skipping the 30 mile water stop was a sound decision. I really did have enough water and food to take me to the 40 mile point without trouble. At about 37 miles, I made the turn from north to east and began the 40 mile trek across the “top” of the course. I picked up another paceline and burned a little too much energy staying with them before I realized that I couldn’t maintain that heart rate. I was letting the excitement of the ride and my surprising speed overrule my judgment, and as the 40 mile stop approached I decided that I could make it to the 50 mile stop. That was a very poor decision. I had less than half a bottle of water left and no food except some peanuts, and I’d already determined that I didn’t like peanuts quite that much for riding food. Two miles later I realized that I’d done a stupid thing, but there was no way I was going back.

One thing non-cyclists don’t realize is how rough these country roads can be. A road that feels just a little rough when you’re driving over it in a car can be torture on a bicycle. In a car, you’re riding on tires that are at least six inches wide and sitting on a soft seat insulated from the road by springs and shock absorbers. You don’t even notice small shallow dents in the road. A road bicycle tire, on the other hand, is about an inch wide and there is no suspension. The fork and frame absorb some of the bumps on the road, but you feel a two inch wide hole that’s only 1/4 inch deep. Maintenance on these country roads consists of chip sealing, which results in a less-than-smooth (to be kind) riding surface. After the first 30 miles or so, it seemed like the entire ride was on chip sealed roads. A few miles of chip seal is a minor annoyance. Ten miles or more is just punishing. Since the bike doesn’t absorb the shock, you have to: in your hands, wrists, shoulders, back, and butt. There’s no doubt that rough roads wear you down.

The “50 mile” stop was actually at 54 miles, and I was out of water. I stopped at the rest area, refilled my water bottles, drank as much as I could comfortably hold, ate some fruit and cookies, and lounged around for a few minutes. They had a band called, I think, Red Dirt Surf, playing surf guitar music. I like surf guitar in small doses, and it really seemed to fit here. I also chatted for a few minutes with the ham radio operators who had a tent there at the stop before climbing on my bike and heading out again. I had been off the bike for about 12 minutes.

I finished the first 54 miles of the ride without stopping, with an average speed of 19.8 MPH, which I’m pretty sure is the fastest 50 miles I’ve ever ridden on a bike. But when I pulled away from that stop, I realized two things. One, the wind had picked up a bit. It was still from the south, but it had become strong enough to be a nuisance as I headed east. The other thing I realized was that I wasn’t going to finish the second half of the ride nearly as fast as I did the first half. I was mildly dehydrated, and I had burned a little bit more energy than I should have. I made a conscious decision to slow down a bit, drink more, and try to rebuild some energy.

It’s funny how one’s memory of things changes once the pain sets in. After leaving that rest area, I stopped looking at the sights and concentrated more on my riding: picking the smoothest possible line (in the right tire track, usually), maintaining a good posture, pedaling as smoothly as possible, keeping an eye on my heart rate monitor, and remembering to drink regularly. My stomach was a little upset (I think it was the peanuts), so I had a tough time getting myself to eat very much. At least I put Gatorade mix in two of my three water bottles and forced myself to drink it even though by now I’d become pretty sick of the taste. About the only things I remember between mile 54 and mile 69 where I stopped again were the town of Burkburnett (the biggest town we passed through, other than Wichita Falls), and the little party going on at Hell’s Gate–the cutoff point that riders have to make before 12:30 if they’re going to do the entire 100 miles. I had no trouble there; I passed Hell’s Gate well before 11:00.

I do recall that, as I approached the rest stop at 69 miles, it dawned on me that this was the furthest I’d ridden this season. My longest training ride was only 65 miles, and I felt a whole lot worse on that ride than I was feeling at the moment. That gave me a little lift. I stopped again at 69 miles, refilled my water bottles, ate a bit more, and sat down under the tent for a few minutes with a cold towel on the back of my neck. I drank a bit, got to feeling better, and headed out again after less than 10 minutes.

The next 10 miles weren’t too bad. We were working our way towards the northeast corner of the course. There was one jog north that felt good with the wind at my back, but I knew I’d have to pay for it later when we turned to head back into the wind. That happened at about 78 miles. Mine wasn’t the only groan when we made a hard right turn and felt that wind directly in our faces.

I stopped again at 84 miles to fill the water bottles and sit down again. I wasn’t eating enough, but I feared that if I did it’d just come right back up. Cold towels on the back of the neck worked wonders to help me cool down, and I even managed to soak my bandana in ice water before taking off. With hair as short and thin as mine, I have to wear a head covering under my helmet or I end up with a rather painful sunburn.

Pulling away from the 84 mile stop, I fully planned to ride it in from there. Even as tired as I was, I couldn’t imagine not being able to ride the last 18 miles (yes, the course is actually 102 miles). I even got a good chuckle a few miles down the road when I spied the First Baptist Church of Dean (one of four buildings in the big town of Dean, TX) and thought of taking a picture to send to my friend Dean. But that would have taken effort. There was a rest area at one of the other three buildings there, and I decided I’d take another break. My average speed was already way down from the nearly 20 MPH I’d established in the first half of the ride, and I had given up on the idea of finishing the course in under six hours. Plus, there was a nice big shady spot on the grass.

The stop was at 92 miles. I had only 10 miles to go, but I was ready to be done. I refilled the water bottles, took off my helmet, and lay on the grass in the shade for 20 minutes. I might even have nodded off for a few minutes. I helped a guy pump up his tire (he had a slow leak and didn’t want to take the time to replace the tube), then grudgingly climbed on the bike again for the last 10 miles.

Perhaps not surprisingly, I started feeling real good almost immediately after I got back on the road. Maybe it was the rest, and maybe it was the prospect of being finished. We were close to the big city again, meaning the roads had improved and there were people on the road cheering us on. The other cyclists around me were feeling good too, it seemed, and we were sharing some laughs and dark humor about the state of the roads we’d so recently covered.

There’s an “outlaw” rest stop–apparently not officially part of the ride–somewhere along there, maybe three miles from the finish. I think it’s a bar. They had a heck of a party going on, and were offering free beer to riders. It was sorely tempting, but I knew that if I stopped there, I’d never complete the ride. I let my better sense prevail and rode the last few miles to the finish.

Maybe a mile from the finish, the route climbs an overpass that isn’t much of a hill, but at 100+ miles any hill seems like a mountain. Plus, it was into the wind and on a fairly rough shoulder. But getting to the top was well worth it. From there, I could see the home stretch: just down an exit ramp, a few turns through the flat and smooth city streets, and a four-block straight run to the finish line. A couple of people passed me on that straight, pushing to “finish strong.” I just rode it in at my normal pace, figuring that saving a few seconds wasn’t going to make much of a difference in my time.

I completed the ride in six hours and 55 minutes, with an average overall speed of about 14.8 MPH and an average moving speed of 16.8 MPH. I spent 6:05 pedaling and 50 minutes at rest areas. Time off the bike is what kills your time in a long ride.

My major mistake in this ride was passing up the 40 mile stop. Had I stopped there to rest, refill my water bottles, and eat something, I would not have become dehydrated. I was smart enough to realize my mistake and try to recover (a good thing), but I probably should have taken it a bit easier between 54 and 69, and eaten more even though the thought of doing so turned my stomach. It sneaks up on you, and by the time you realize you’re dehydrated, it’s too late to recover without seriously slowing down.

Still, 6:55 is close to the fastest I’ve ever covered 100 miles, and I’m reasonably happy with my performance considering my abbreviated training period this year. I’m disappointed that I made the mistake of pushing on past the 40 mile mark without stopping to refuel, but glad that I realized my mistake and took steps to minimize the damage. Next time I’ll know better. Right?

Everything considered, it was a great time. I’m looking forward to next year, tent camping and all.