Remove hardware mystery

Two different times now, when trying to “safely remove” a USB hard drive, I’ve had an error message pop up telling me the drive can’t be removed, even though I’m not actually accessing it.

I closed all open applications, even logged out to make sure that no user applications were open.  When I logged back in and tried to remove the device, I got the same error message.

The Windows 2008 Resource Monitor tells me that the System process (process ID 4) has two files open on the device:  F:\$LogFile (NTFS Volume Log) and F:\$Mft (NTFS Master File Table).  Why it’s holding those files open when I told it to remove the device is beyond me.  And I have absolutely no idea how to tell Windows to let go of the device.

I just realized that both times this happened, I was logged in to the machine via Remote Desktop.  That shouldn’t be an issue, but it’s probably worth looking into.  I know that I’ve removed drives via Remote Desktop before.

If anybody has a clue why this is happening, I’d sure like to hear about it.

Update Wednesday, August 27

It happened again yesterday, so I downloaded Process Explorer to see if I could get a little more information.  Searching for handles to “F:\” turned up a handful of open handles, held by system services.

That tells me what handles are open, but it sure doesn’t give me much in the way of useful information.  It seems like the remove hardware action should tell those services to let go of the handles.

There is a way within Process Explorer to force-close those handles, but attempting to do so results in a dire warning about possibly causing a crash or instability, and I wasn’t prepared at the time to crash my server.  Closing the Remote Desktop window and logging in as Administrator at the machine’s console didn’t allow me to remove the drive.  So I just pulled the plug.  No ill effect.

After disconnecting the drive yesterday, I took it to the data center, copied some stuff to it, and brought it back here.  I connected the drive, ran the program that copies data off the drive, and attempted to disconnect it again.  That time it worked.  As far as I recall, I performed the same steps as I always do.  Why it worked this last time when it hasn’t worked previously is beyond me.

Should VPN be this hard?

Last week we moved the crawlers from our office to a real data center where we can get more, and more reliable, bandwidth. Getting everything installed and working wasn’t too much trouble, although the next time I have to do something like that I’m going to do a lot more pre-installation work here at the office before taking the machines to the data center. Installing and configuring 10 machines while standing in the cold, noisy data center isn’t my idea of a good time.

Having machines at the data center means that we need some way to log in and check on them. Not a problem, as the Cisco security appliance we bought supports VPN. And configuring the Cisco IPSec VPN was quite simple. I was pretty happy when, with just an hour of looking at the documentation and fiddling with the configuration, I was able to log in to the VPN from my laptop. I packed up my stuff and headed back here to get everybody set up to use the VPN.

And then I found out that Cisco’s IPSec VPN client won’t run on 64-bit versions of Windows.  Nor does Cisco have any plans to upgrade it.  Since I’m not willing to create a 32-bit virtual machine just for running the VPN client, that leaves me with the option of configuring the router for some other type of VPN.  And there things get difficult.  The documentation that came with the router doesn’t discuss any type of VPN configuration other than IPSec, and the online documentation I’ve seen makes the assumption that I understand everything there is to know about VPN.  It gets confusing in a real hurry.

There are VPN standards.  There are so many, in fact, that no mere mortal can begin to understand them.  It might as well be a free for all with all those competing protocols.  Just the acronyms are enough to push a questionably sane person such as myself over the edge into babbling lunacy.  I’ve yet to find a document that explains, in terms a reasonably bright person who hasn’t passed Cisco’s certification can understand, how to configure the VPN.  I can’t even find a good discussion of the benefits and drawbacks of the different VPN technologies:  IPSec, L2TP, or SSL.

I also need to configure VPN on our pfSense box here at the office.  That looks almost as daunting as the Cisco configuration, and the documentation is, if you can imagine, even worse.

I realize that much of my frustration stems from my lack of expertise in this area.  I’m a programmer, not a network admin.  But I have to think that VPN just doesn’t need to be this hard.

I can find lots of “how VPN works” types of discussions online, but they’re presented at a very high level.  There also is plenty of detailed documentation about VPN configurations for very specific situations.  But I’ve found nothing in the middle.  Something like “Simple VPN configuration for people who don’t live and breathe this stuff.”

Pointers to good discussions of the different types of VPN, and good tutorials about configuring VPN on the Cisco ASA or pfSense would be greatly appreciated…

Paranoia versus productivity

We had an interesting discussion at the office about how much validation a collection type should do in its constructor. The key question, I think, came down to this:

If the constructor can determine that using the instantiated object will throw an exception, should the constructor fail rather than returning the instantiated object?

In other words, if I know that the instantiated object won’t work, shouldn’t I just throw the exception now, rather than let you be surprised later?

There are two extremes here: 1) the constructor should go to heroic efforts to validate everything, and 2) let the buyer beware. I tend to lean towards putting the onus on the caller, figuring that whoever is instantiating the object knows what he’s doing. Let me provide an example.

Consider the .NET SortedList generic collection type. To do its job (that is, keep a collection of items sorted), it requires a comparison function. If you don’t specify a comparison function when you call the constructor, the collection uses the default comparison function for whatever type you specify as the key. This sounds simple enough, right? A list of employees that’s sorted by employee number, for example, would be defined like this:

SortedList<int, Employee> Employees =
    new SortedList<int, Employee>();

Because the System.Int32 type (which the C# int type resolves to) implements IComparable, everything works.

But imagine you have an EmployeeNumber type:

class EmployeeNumber
{
    public string Division { get; private set; }
    public int EmpNo { get; private set; }
    public EmployeeNumber(string d, int no)
    {
        Division = d;
        EmpNo = no;
    }
}

Now, if you create a SortedList that’s keyed on that type, you’ll have:

SortedList<EmployeeNumber, Employee> Employees =
    new SortedList<EmployeeNumber, Employee>();

Allow me to show the entire program here, so we don’t get confused.

using System;
using System.Collections;
using System.Collections.Generic;
using System.Linq;
using System.Text;

namespace genericsTest
{
    class EmployeeNumber
    {
        public string Division { get; private set; }
        public int EmpNo { get; private set; }
        public EmployeeNumber(string d, int no)
        {
            Division = d;
            EmpNo = no;
        }
    }

    class Employee
    {
        public string Name { get; set; }
        public Employee(string nm)
        {
            Name = nm;
        }
    }

    class Program
    {
        static SortedList<EmployeeNumber, Employee> Employees =
            new SortedList<EmployeeNumber, Employee>();

        static void Main(string[] args)
        {
            Employees.Add(new EmployeeNumber("Accounting", 1),
                new Employee("Sue"));
            Employees.Add(new EmployeeNumber("Dev", 2),
                new Employee("Jim"));
        }
    }
}

If you compile and run that program, you’ll see that it throws an exception when it tries to add the second employee to the list. The program fails because it can’t compare the keys: EmployeeNumber doesn’t implement IComparable (or IComparable&lt;EmployeeNumber&gt;), so the default comparer has nothing to work with.
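For completeness, the failure goes away if you hand the constructor a comparison function yourself. Here’s a minimal sketch that drops into the program above (EmployeeNumberComparer is my own name, not something from the framework):

class EmployeeNumberComparer : IComparer<EmployeeNumber>
{
    // Order first by division, then by employee number.
    public int Compare(EmployeeNumber x, EmployeeNumber y)
    {
        int rslt = x.Division.CompareTo(y.Division);
        if (rslt == 0)
            rslt = x.EmpNo.CompareTo(y.EmpNo);
        return rslt;
    }
}

// Pass the comparer to the constructor, and Add no longer throws.
static SortedList<EmployeeNumber, Employee> Employees =
    new SortedList<EmployeeNumber, Employee>(new EmployeeNumberComparer());

But that’s the caller doing the work, which is exactly the question at hand: how much of that responsibility belongs to the constructor?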

Those who lean towards the first extreme above will argue that the SortedList constructor should determine that the key type doesn’t implement IComparable, and should prevent you from instantiating the collection. It should throw an exception because it knows that trying to add items to the collection will fail.

The constructor could do this. It’s possible for the constructor to get the default comparer and try it out. If the comparison succeeds, then all is well. If it fails, then the constructor throws an exception saying, “Sorry, but you didn’t supply a comparison function.”
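To make that concrete, here is roughly what such a check might look like. This is my own sketch of a hypothetical ParanoidSortedList, not what the real SortedList constructor does; and since the constructor has no key instances to actually compare yet, the closest it can come in practice is verifying that the key type claims to be comparable at all:

using System;
using System.Collections.Generic;

class ParanoidSortedList<TKey, TValue>
{
    private readonly IComparer<TKey> comparer;

    public ParanoidSortedList()
    {
        // The "heroic" check: refuse to construct unless TKey advertises
        // a comparison, because the default comparer will need one.
        if (!typeof(IComparable<TKey>).IsAssignableFrom(typeof(TKey)) &&
            !typeof(IComparable).IsAssignableFrom(typeof(TKey)))
        {
            throw new InvalidOperationException(
                "Sorry, but you didn't supply a comparison function.");
        }
        comparer = Comparer<TKey>.Default;
    }

    public ParanoidSortedList(IComparer<TKey> comparer)
    {
        // Supplying an explicit comparer sidesteps the check entirely.
        this.comparer = comparer;
    }

    // Add, Remove, the indexer, and so on would go here.
}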

The only problem with that scenario is that it’s wrong. Not wrong philosophically, but wrong in a very concrete sense. Extending my example will illustrate why.

Suppose you have two different types of employee numbers: maybe an OldEmployeeNumber that looks like the one I defined above, and a NewEmployeeNumber that has different fields. Because you want to keep both employee number types in the same list, you define an abstract base class, EmployeeNumber, from which they both inherit. The definitions would look like this:

abstract class EmployeeNumber : IComparable
{
    // Some common employee number functionality goes here.

    public int CompareTo(object obj)
    {
        throw new NotImplementedException();
    }
}

class OldEmployeeNumber : EmployeeNumber, IComparable
{
    public string Division { get; private set; }
    public int EmpNo { get; private set; }
    public OldEmployeeNumber(string d, int no)
    {
        Division = d;
        EmpNo = no;
    }

    int IComparable.CompareTo(object obj)
    {
        int rslt = 0;
        if (obj is OldEmployeeNumber)
        {
            var o2 = obj as OldEmployeeNumber;
            rslt = Division.CompareTo(o2.Division);
            if (rslt == 0)
                rslt = EmpNo.CompareTo(o2.EmpNo);
        }
        else if (obj is NewEmployeeNumber)
        {
            // OldEmployeeNumber sorts before NewEmployeeNumber
            rslt = -1;
        }
        return rslt;
    }
}

class NewEmployeeNumber : EmployeeNumber, IComparable
{
    public string Country { get; private set; }
    public decimal EmpNo { get; private set; }
    public NewEmployeeNumber(string c, decimal no)
    {
        Country = c;
        EmpNo = no;
    }

    int IComparable.CompareTo(object obj)
    {
        int rslt = 0;
        if (obj is NewEmployeeNumber)
        {
            var o2 = obj as NewEmployeeNumber;
            rslt = Country.CompareTo(o2.Country);
            if (rslt == 0)
                rslt = EmpNo.CompareTo(o2.EmpNo);
        }
        else if (obj is OldEmployeeNumber)
        {
            // NewEmployeeNumber sorts after OldEmployeeNumber
            rslt = 1;
        }
        return rslt;
    }
}

Yeah, I know. That’s quite a mouthful.

The EmployeeNumber base class implements the IComparable interface, but its implementation just throws NotImplementedException. Furthermore, the class is marked as abstract to prevent it from being instantiated directly. Only the derived classes can be instantiated.

The derived classes each explicitly implement the IComparable interface. The company-defined sorting rules are that old employee numbers always sort in the list before new employee numbers. Within the same type, the numbers are sorted using their own rules. [Note here that my CompareTo implementations aren’t terribly robust. They’ll return zero (equal) if the object passed is null or isn’t one of the known types, which technically violates the IComparable contract. But those details aren’t terribly relevant to the example.]

Now, the Employees list is created in exactly the same way:

SortedList<EmployeeNumber, Employee> Employees =
    new SortedList<EmployeeNumber, Employee>();

We can then add items to the list:

Employees.Add(new NewEmployeeNumber("USA", 2.002m),
    new Employee("Jim"));
Employees.Add(new OldEmployeeNumber("Accounting", 1),
    new Employee("Sue"));
Employees.Add(new OldEmployeeNumber("HR", 3),
    new Employee("Dana"));

If you make those changes and run the program, you’ll see that it does indeed run and work as expected, even though the comparison function that the constructor sees hasn’t changed.

If SortedList had attempted to protect me from myself–that is, call the default comparison function and throw an exception because the comparison function had failed–then this final code would not work. By trying to protect me from myself, it would have prevented me from doing what I wanted to do.

Understand, the above is something of a contrived example. I certainly can’t imagine implementing the employee list that way, even if I did have different employee number types. But somebody else might think it’s a perfectly reasonable thing to do. The point is that there could be very good reasons for instantiating a keyed collection with a key type that does not have a valid comparison function. The constructor cannot know if comparisons will fail.

Which brings us back to the original question: how hard should a collection class (or any library object) try to prevent you from instantiating an object that will fail? In my opinion, the constructor should instantiate the object if the immediate parameters look reasonable. My reasoning is that it’s extremely difficult, if not impossible, to know how the caller will be using the class. As you saw above, making broad assumptions about types in a polymorphic environment can be fatal.

This reasoning extends far beyond the question of how a collection class’s constructor should behave. As programmers, we have to strike a balance between paranoia and productivity. We have to decide daily how much trust to put in the code that calls our methods, and how much we can depend on the code we call. Do we write classes that hold the programmer’s hand to help him across the street, or do we provide a “walk” signal and a warning that says, in effect, “If you cross on red, all bets are off”?

The Government Rant

The best thing about our government is that it never ceases to amuse me. It’s also continuously annoying, but I guess you have to take the bad with the good. It’s not the government itself that amuses me so much, but rather the absurd things that our illustrious Congresscritters do and say in an attempt to garner votes. The most amusing (and also the most frustrating) thing is that constituents continue to be taken in. Rather than making an effort to come up with a solution ourselves, we argue over which totally unworkable plan our elected representatives should vote on. This gives the leeches in Washington incredible leeway to do anything, and then spin their positions to best advantage.

Examples abound. Let’s look at some of the more recent.

Dependence on foreign oil

Our country’s dependence on foreign oil has been a major problem since the Arab oil embargo of 1973. In the 35 ensuing years, Congress has put forth all manner of proposals to “fix” the problem. We’ve funded research into solar, geothermal, tidal, and other natural energy sources, and provided incentives and subsidies for domestic oil exploration, coal, ethanol, and all manner of questionable energy-saving technologies. Today our government has much more control over energy policy than it did in 1973, and yet we’re more dependent on foreign oil than we were back then.

Seven administrations and countless members of Congress have been “doing something about the problem” for 35 years, and the problem has gotten worse. And yet the vast majority of Americans look to Congress and the President for a solution to high gas prices, all the while cheering for or ridiculing the laughably simple-minded, short-term proposals that are put forth. Our representatives, of course, couldn’t care less. All they have to do is make themselves look good to their own constituents. As long as they can keep the voting public believing that government is the solution, their jobs are secure.

Every thinking American (and, sadly, I’m beginning to believe that the number is falling fast) knows that the solution to our energy problems requires conservation, domestic oil and gas production, development of nuclear plants, exploitation of wind, thermal, solar, and other natural sources, and research into more energy efficient transportation and buildings. We won’t solve anything unless we address all of those areas. And it’s going to take time. Government has proven that it’s incapable of formulating and implementing a workable energy policy. It’s time to get government out of the picture. No more subsidies, incentives, or preferential treatment. Let the market decide.

Tax Rebates

This is one of the dumber things I’ve seen Congress do. And, yes, I realize that both the 2001 and the 2008 rebates were initially proposed by President Bush. That doesn’t relieve Congress of their complicity and their ultimate responsibility. The 2001 rebate was “justified” by a “budget surplus”, a surplus that anybody with a fifth-grade education knew was an illusion. This year’s rebate was “justified” by the current economic situation. Congress would have you believe that a windfall of a few hundred dollars (up to $1,200, as I recall) would “stimulate the economy” and soften the recession. Any thinking person could have told you that the result would be a short-term spike in consumer spending, followed by a quick return to normal. I can’t prove this yet, but I suspect that it also resulted in people putting down payments on things they can’t afford, figuring they’d find a way to make the monthly payments.

Congress, of course, knew that the tax rebates wouldn’t have an effect on the economy other than to increase the size of the federal debt. But that’s okay. What’s a few billion more dollars compared to the time-honored tradition of buying votes? It is an election year, after all. Besides, it made for good press coverage, and retail store managers drooled over the prospect of Christmas in July. The rebates seem so popular that Senator Obama has proposed a $1,000 rebate to fight energy costs.

The reaction of those receiving the rebates was predictable. Most squandered it like drunken sailors on leave. Those few who know the names of their Congressmen or Senators might have lifted a glass in salute, but most just thanked the government for the handout. That’s what surprises me the most. It’s like having somebody cut your arm off at the shoulder and then thanking him when he returns the forearm and hand. Idiots.

The “mortgage crisis”

This one is fun because there are so many levels of idiocy. Lenders made high-risk loans to people who were demonstrably incapable of paying them back, then sold those loans to a government sponsored enterprise, which ultimately will be bailed out by taxpayers when the original borrowers default.

When borrowing money in good faith, both the lender and the borrower are responsible for ensuring that the money can be paid back. But when the lender is just a middleman who gets paid for making the loan and selling it to somebody else, there is little incentive for him to vigorously check the borrower’s documentation. On the contrary, there is ample incentive for him to be very creative in putting together a loan package, both by making the terms of the loan appear attractive to the borrower and by making the borrower look attractive to the third party who’s buying the loan. Sure, the middleman will eventually be found out, but the short term rewards are incredible.

And when the ultimate buyer is a government sponsored enterprise like Fannie Mae or Freddie Mac, there is almost no oversight. When you have, with government’s blessing, a virtual monopoly on the secondary mortgage market, you know that you’ll get bailed out if things go bad. So where’s the incentive to insist on real documentation for the loans that you buy?

I’m not an economist by any stretch of the imagination. I’m not even a financial analyst. But I’m not an idiot, either. I and many others saw this coming three years ago. Congress ignored the problem at the time, or discounted it as scare mongering. I’ll go out on a limb here and say that most of them probably knew what was coming. But they also knew that there wasn’t anything they could do about it and that bringing it up would be very unpopular. Our elected representatives are many things, but stupid is not one of them.

Now that the real extent of the problem has become apparent, Congress is all over it with one proposal after another. They’re “doing something about the problem.” They know that there are only two possible solutions: either pump money into Fannie Mae and Freddie Mac to keep them afloat, or cut them loose and let people finally endure the consequences of their actions. We know, just by the nature of elected officials, what their solution will be: another hundred billion dollars or more shelled out to fix a problem that Congress created in the first place. And We the Sheeple just nod our heads and thank Congress for taking care of us once again.

More is better?

All three of the above examples demonstrate extreme incompetence on the part of government. The Congress-proposed solution to those problems, as with all others, is more government regulation. As if creating ever more and larger bureaus, agencies, and departments will somehow transform government into an intelligent and effective organization. And we let them do it! When will people learn that the cure for a headache is to stop beating your head against the wall?

I used to get upset when I’d think about this stuff. I used to rant and carry on about the proper function of government, and how intrusive government is in our daily lives. But nobody listens. Nobody seems to care. I learned a while back to stop bashing my head against that particular pile of bricks. Now I just laugh and hope that the coming violent overthrow (which will almost certainly happen if government continues on its current path) doesn’t occur until after I’m gone.

Hey, you deleted my files!

We got a rather strongly worded message the other day from a Webmaster who was threatening legal action because our crawler deleted a bunch of files from his site. The news that our crawler is capable of deleting files was quite a surprise to us. Like other crawlers, ours just downloads HTML files, extracts links, and then visits those links. There is no “delete a file” logic in there.  But if the crawler stumbles upon a link whose action is to delete a file, then visiting that link will indeed delete the file.

Further investigation in this particular case revealed a file management page that includes, among other things, links that have the form: www.example.com/files/?delete=filename.txt. Surprisingly enough, clicking on that link deletes the file. The file management page is not protected by a password, nor is there any kind of confirmation displayed before the file is permanently deleted.
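For what it’s worth, a page like that could at least be hidden from well-behaved crawlers with a robots.txt file at the site root. Assuming the file management page really does live under /files/, an entry like this would keep compliant crawlers away; it’s no substitute for a password, since it only stops crawlers that choose to honor it:

User-agent: *
Disallow: /files/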

Examining the logs, we saw accesses from other search engine crawlers. We also learned from the Webmaster that some time back, a kid had “hacked in” to the site and deleted a bunch of files.

I’m a little surprised that anybody would create such a page and not provide any protection.  I’m very surprised to find out that a supposedly professional Web developer would do such a thing and not learn the lesson when a random surfer came in and deleted files. And I’m shocked that, even after we explained this to the Webmaster, he insists that we can take this as an opportunity to learn from our “mistake” and “fix” the crawler so that it doesn’t happen again.

It’s unfortunate that our crawler visited those links, causing the files to be deleted. But the mistake was on the part of the person who posted those destructive links. The crawler was operating exactly as it should. Exactly, in fact, as every major search engine crawler acts.  It’d be nice if we could imbue the crawler with enough intelligence to “understand” Web pages and know in advance what the effects of clicking a link will be. But that kind of machine intelligence is far, far in the future.

If you post something on the Web, it will be found, unless you take active measures to protect it. Posting a destructive link on an unprotected page and then blaming somebody else when the link is clicked by an “unauthorized” person is akin to running out into a busy street and then blaming your injuries on the driver of the bus that hits you. Or wiring a bomb to your doorbell and blaming the person who presses the button for blowing up your house.

Multicore crisis?

There’s been some talk recently of the next “programming crisis”: multicore computing. I’ll agree that we should be concerned, but I don’t think we’re anywhere near the crisis point. Before I address that specifically, I think it’s instructive to review the background: why multicore processors exist, how they affect existing software, and the issues involved in writing code to make use of multiple cores.

Moore’s law has been quoted and misquoted so often that it’s almost a cliché. Gordon Moore’s original statement was simply an observation of the rate at which transistor counts on integrated circuits were increasing, along with his expectation that the trend would continue for at least 10 years. That was in 1965. The trend has continued, and there’s no indication that it will slow.

Some people think Moore’s Law has become something of a self-fulfilling prophecy: because we believe that it’s possible, somehow we strive to make it so. One wonders what would have happened if Moore had said that he expected the rate of growth to increase. Would transistor densities have increased at an exponential rate?

Self-fulfilling prophecy or not, it’s almost certain that the trend of increasing transistor densities will continue (it has through 2007) and that as a result we’ll get ever more powerful CPUs as well as faster, higher-capacity RAM. Absolute processor speed as measured by clock rate will continue to increase, but not at the astounding rates that we saw up to 2005 or so. Quantum effects and current leakage have put a damper on the rate of growth there. Better materials will solve the problem (indeed, they’re already solving it), but absent a fundamental breakthrough by the chemists working on the problem, clock speeds won’t be doubling every 18 months as they did in the recent past. The Clock Speed Timeline graph makes this quite evident.

Today’s trend is towards multiple cores on a single processor, running at a somewhat slower clock rate. The machine I’m writing this on, for example, has a quad-core Intel Xeon processor running at 2 GHz. The clock speed is somewhat slower than you can get in a high-end Pentium, but the multiple cores provide more total computing power. Quad-core processors are quite common today. Intel demonstrated an 80-core chip in February of 2007, and promised to deliver it within five years. I fully expect to have a 256-core processor in my desktop computer ten years from now.

The trend towards multiple cores and very slowly increasing clock rates has some interesting ramifications for software developers. In the past, we have depended on more RAM and faster processors to give us some very nice performance boosts. All indications are that the amount of available RAM and the size of on-chip caches will continue to grow, but we can’t count on clock speeds doubling every couple of years. Unless we learn to write programs that use multiple cores, we will soon reach a very real performance ceiling.

Not all applications can benefit from multiple cores, but you’d be surprised at how many can. And even in those cases when a single program can’t make use of multiple cores, users still benefit from having a multicore processor because the machine is better at multi-tasking. Imagine running four virtual machines on one computer, for example. If the computer has a single processor core, all four virtual machines and all of the operating system services share that one core. On a quad-core processor, the work load is spread out over all four cores. The result is more processor cycles per virtual machine, meaning that all four virtual machines should run faster.

Software systems that consist of multiple mostly-independent processes can make good use of multicore processors without any modification. Consider a system consisting of two services that are constantly running. On a single-core computer, only one can actually be working at a time. You could almost double performance simply by upgrading to a dual-core processor. Such software systems are quite common, and they require no code changes in order to benefit immediately from the new multicore processor designs.

Contrary to popular belief, writing code that is explicitly multi-threaded–designed to take advantage of multiple cores–isn’t necessarily a huge step up in complexity. Such code can be much more complex than single-threaded code, but it doesn’t have to be. Some programs are more multi-threaded than others. I’ve found it useful to think of programs in terms of the following four levels of complexity:

  1. No explicit multithreading.
  2. Infrequent, mostly independent asynchronous tasks.
  3. Loosely coupled cooperating tasks.
  4. Tightly coupled cooperating tasks.

Obviously, it’s impossible to draw exact boundaries between the levels, and many programs will use features found in two or more of the levels. In general, I would classify a program by the highest level of multi-threading features that it uses.

Level 1 requires little in the way of explanation. This is the most common type of application in use today. In a batch mode program, execution proceeds sequentially from start to finish. In a GUI program, user interface events and processing execute on the same thread. This type of application has served us well over the years.

Most Windows programmers have some experience with the next level of complexity. A GUI application that performs background processing and periodically updates the display is an example of this type of program. Typically, the program starts a background task, which from time to time raises events that the GUI thread handles to update the display. Data and process synchronization between tasks is limited to the event handlers that respond to those asynchronous events. Modern development environments make it very easy to create such programs. These programs can benefit from multiple processor cores because the background thread can operate independently of the GUI thread, making the GUI thread much more responsive.
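In the .NET world, the BackgroundWorker component packages up this whole pattern. Here’s a minimal sketch of a Windows Forms form that uses it; InitializeComponent, progressBar, and statusLabel are assumed to come from the designer-generated part of the form:

using System.ComponentModel;
using System.Threading;
using System.Windows.Forms;

public partial class MainForm : Form
{
    private readonly BackgroundWorker worker = new BackgroundWorker();

    public MainForm()
    {
        InitializeComponent();
        worker.WorkerReportsProgress = true;
        worker.DoWork += Worker_DoWork;                   // runs on a background thread
        worker.ProgressChanged += Worker_ProgressChanged; // raised on the GUI thread
        worker.RunWorkerCompleted += Worker_Completed;    // raised on the GUI thread
        worker.RunWorkerAsync();
    }

    private void Worker_DoWork(object sender, DoWorkEventArgs e)
    {
        // The long-running background task.
        for (int i = 1; i <= 100; i++)
        {
            Thread.Sleep(100);            // stand-in for real work
            worker.ReportProgress(i);     // marshals an event back to the GUI thread
        }
    }

    private void Worker_ProgressChanged(object sender, ProgressChangedEventArgs e)
    {
        progressBar.Value = e.ProgressPercentage;   // safe: we're on the GUI thread
    }

    private void Worker_Completed(object sender, RunWorkerCompletedEventArgs e)
    {
        statusLabel.Text = "Done";
    }
}

The point is that the progress and completion events are marshaled back to the GUI thread for you, so the only synchronization you have to think about is inside those event handlers.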

I have found the third level of complexity–loosely coupled cooperating tasks–to be a very useful and relatively simple way to make use of multiple cores. The idea is to construct a program that operates in an assembly line fashion. For example, consider a program that gathers input, does some complex processing of the input data, and then generates some output. Many such programs are processor bound. If you structure the program such that it maintains an input queue, a pool of independent worker threads, and an output queue, then there is little danger of running into the problems that often plague more complex programs. You have to supply synchronization (mutual exclusion locks, or similar) on the input and output queues, but the worker threads operate independently. Using this technique on a quad-core processor, it’s possible to get an almost 4x increase in throughput over a single-core processor, with very little danger of running into resource contention issues.
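Here’s a bare-bones sketch of that assembly-line structure, using nothing fancier than Queue&lt;T&gt;, lock, and Monitor. The names (Pipeline, WorkerLoop, Process) are mine, and the “processing” is just a stand-in for whatever the real work would be:

using System;
using System.Collections.Generic;
using System.Threading;

class Pipeline
{
    private readonly Queue<string> inputQueue = new Queue<string>();
    private readonly Queue<string> outputQueue = new Queue<string>();
    private bool finishedAdding = false;

    // Called by the producer (the part that gathers input).
    public void AddWork(string item)
    {
        lock (inputQueue)
        {
            inputQueue.Enqueue(item);
            Monitor.Pulse(inputQueue);      // wake a waiting worker
        }
    }

    public void CompleteAdding()
    {
        lock (inputQueue)
        {
            finishedAdding = true;
            Monitor.PulseAll(inputQueue);   // wake everyone so they can exit
        }
    }

    // Each worker thread runs this loop. Workers only touch the queues
    // under a lock; the actual processing happens with no locks held.
    public void WorkerLoop()
    {
        while (true)
        {
            string item;
            lock (inputQueue)
            {
                while (inputQueue.Count == 0 && !finishedAdding)
                    Monitor.Wait(inputQueue);
                if (inputQueue.Count == 0)
                    return;                 // no more work coming
                item = inputQueue.Dequeue();
            }

            string result = Process(item);  // the CPU-heavy part

            lock (outputQueue)
            {
                outputQueue.Enqueue(result);
            }
        }
    }

    private string Process(string item)
    {
        return item.ToUpperInvariant();     // stand-in for the real computation
    }
}

// Usage sketch: start one worker per core, feed the input queue, then signal completion.
// var pipeline = new Pipeline();
// var threads = new List<Thread>();
// for (int i = 0; i < Environment.ProcessorCount; i++)
// {
//     var t = new Thread(pipeline.WorkerLoop);
//     t.Start();
//     threads.Add(t);
// }
// ... call pipeline.AddWork(...) as input arrives ...
// pipeline.CompleteAdding();
// threads.ForEach(t => t.Join());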

Written correctly, programs that have multiple tightly-coupled cooperating tasks make the best possible use of processor resources. However, explicitly coding thread synchronization is perhaps the most difficult type of programming imaginable. Forgetting to lock a resource before accessing it can lead to unexplained crashes or data corruption. Holding a lock for too long can create a performance bottleneck. Locks that are too granular increase complexity and also the chance of deadlock. Locks that are not granular enough will stall worker threads. Race conditions are endemic. Assuming you get such a program working, even a small change will often cause new, unanticipated problems. Writing this kind of code is hard. You’re much better off re-thinking your approach to the problem and casting it as a Level 3 problem. Whatever price you pay in performance will be returned many times over in increased reliability and reduced development time.
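As a tiny illustration of how easy it is to get burned, consider two threads that take the same two locks in opposite order. Run this often enough and it will eventually hang; it’s sketched here purely as a cautionary example:

using System;
using System.Threading;

class DeadlockDemo
{
    static readonly object lockA = new object();
    static readonly object lockB = new object();

    static void Main()
    {
        var t1 = new Thread(() =>
        {
            lock (lockA)
            {
                Thread.Sleep(50);       // widen the window for the race
                lock (lockB) { Console.WriteLine("thread 1 got both locks"); }
            }
        });

        var t2 = new Thread(() =>
        {
            lock (lockB)                // opposite order: the recipe for deadlock
            {
                Thread.Sleep(50);
                lock (lockA) { Console.WriteLine("thread 2 got both locks"); }
            }
        });

        t1.Start();
        t2.Start();
        t1.Join();                      // with the sleeps in place, this will likely never return
        t2.Join();
    }
}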

If you’re writing a Level 3 or Level 4 program, you should very seriously consider using an existing multi-tasking library if at all possible. Doing so will require that you think about your problem differently, but in exchange you leverage a lot of known-working code that is almost certainly more robust than anything you’re likely to write yourself in the time allotted. Two good examples of such libraries are the Parallel Extensions to .NET 3.5 and the Java Parallel Processing Framework. Similar libraries exist for many other programming environments. Although still in their infancy, these libraries promise to greatly simplify the move to multicore. If you’re contemplating development of a program that should make good use of multiple cores, you should definitely learn about any parallel computing libraries that support your platform.
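To give a flavor of what these libraries buy you, here’s the sort of thing Parallel Extensions lets you write. This is a sketch based on the Parallel.For pattern; the exact namespace and overloads have shifted between the CTP and later releases, so treat the details as approximate:

using System;
// In the June 2008 CTP the Parallel class lives in System.Threading;
// in later releases it moved to System.Threading.Tasks.
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        double[] input = new double[1000000];
        double[] output = new double[input.Length];
        var rand = new Random(42);
        for (int i = 0; i < input.Length; i++)
            input[i] = rand.NextDouble();

        // The library decides how to carve the range up across cores;
        // the body just has to be safe to run for different i in parallel.
        Parallel.For(0, input.Length, i =>
        {
            output[i] = Math.Sqrt(input[i]) * Math.Log(input[i] + 1.0);
        });

        Console.WriteLine("Done: " + output[0]);
    }
}

The important thing isn’t the particular syntax; it’s that the partitioning, thread management, and load balancing become the library’s problem instead of yours.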

So, back to the crisis. Bob Warfield over at SmoothSpan Blog has had, and continues to have, quite a lot to say about it, and many others share his sentiments. I, on the other hand, don’t think we’re anywhere near the crisis point. Nor do I think we’re likely to get there. While it’s true that most current software isn’t multicore-ready, software developers have understood for several years now that they need to begin writing applications that take advantage of multiple processor cores. It’s likely that some shops have taken an ad hoc approach to the problem, and they’re probably suffering from the issues I pointed out above. It’s also likely that many (and I would hope, most) development shops have done the prudent thing and adopted a parallel computing library that takes care of the difficult areas, leaving the programmers to worry about their specific applications. Doing so is no different than adopting an operating system, development environment, GUI library, report generator, or any other third-party component: something that development shops have long experience with.

In short, the multicore “crisis” that the doomsayers are warning us about is almost a non-issue. It will require a modest amount of programmer retraining, and there will undoubtedly be a temporary plateau in the rate at which our data-processing power increases, but in a very short time we’ll again have mainstream applications that push all this fancy hardware to its limits.