More bailout madness

In a very thinly reported move last week, the Federal Reserve announced that it will spend up to $600 billion buying up obligations of government-sponsored enterprises and mortgage-backed securities, many of which were the original targets of the $700 billion “bailout” plan back in October.

To me, the most interesting thing about this move is that it won’t cost the Federal Reserve anything. Well, whatever it costs for paper, ink, and the electricity to run the presses while they print up $600 billion in crisp new bills. That’s right, they’re going to “pay” for those assets by increasing the amount of money. Not a bad racket if you can get it. All of a sudden the Fed has $600 billion to spend that it didn’t have to get by running, hat in hand, to Congress. Nope. Just print up the bills and nobody’s the wiser. Never mind that the dollars you’re holding today are worth slightly less than they were yesterday.

Something similar is happening when the Treasury injects money into the banks. In exchange for the Treasury’s $25 billion, a bank gives the government shares of stock. They’re special non-voting shares, which is a good thing, but they’re new shares that dilute the value of existing shares. Stockholders end up paying for it in two ways: the value of their shares is diluted, and they have to pay increased taxes to offset the government expense (or, if new money is printed, pay the hidden tax of inflation).

How about a simple example to illustrate the concept? Imagine that your neighborhood creates a lawnmower co-op. For $10, you get one of 10 shares in the co-op. But the co-op falls on hard times; maybe it needs to upgrade the mower. The co-op issues 10 new shares, bringing the total number of shares to 20, and sells those shares to new members at $5 each. All of a sudden, your stake in the co-op has been cut in half: you own 1/20 of it instead of 1/10. That’s bad enough, but you also get hit with an assessment from the co-op for increased maintenance costs. That’s pretty much what’s happening to stockholders when the banks take the bailout money.

I thought, back in September, that Bernanke, Paulson, and company had a real plan, backed up with solid reasoning, and some idea of what effects their proposed policies would have on the financial system and the economy. I didn’t agree with their ideas, but I thought I could at least respect their reasoning. But three months later, I’m not so sure. I think that they, just like Congress and the Bush Administration in general, are trying anything in an attempt to get a short-term boost, regardless of the long-term consequences.

The best bet for everybody would be for governments to step back and take an honest critical look at the situation, study possible responses and their likely outcomes, announce a plan, and then stick with it. The current whac-a-mole approach is worse than doing nothing (which, come to think of it, still doesn’t sound like a bad idea). Changing approaches every few weeks does nothing but add uncertainty, causing people to overreact and creating chaos when the thing we need most right now is stability.

No IEnumerable ForEach?

Overall, I like working with C# and the .NET Framework.  But sometimes I just can’t imagine what the designers were thinking when they put some things together.  High on the list of things I don’t understand is the lack of a ForEach extension method for the generic IEnumerable interface.

IEnumerable extension methods were added in .NET 3.5 to support LINQ. These methods enable a more functional style of programming that you just couldn’t do with earlier versions of C#. For example, in earlier versions of C# (before 3.0), if you wanted to obtain the sum of items in an array of ints, you would write:

int[] a = new int[100];
// some code here to initialize the array
// now compute the sum
int sum = 0;
foreach (int x in a)
    sum += x;

Arrays implement the generic IEnumerable interface, and the .NET 3.5 extension methods give that interface a Sum method to which you pass a lambda expression. Computing the sum of items becomes a one-liner:

int sum = a.Sum(x => x);

That syntax looks pretty strange at first, but it takes very little getting used to, and once it clicks you begin to see how powerful this functional style of programming is.

IEnumerable extension methods let you do lots of different things as one-liners. For example, if you have a list of employees and you want to select just those who are female, you can write a one-liner:

var FemaleEmployees = Employees.Where(e => e.Gender == 'F');
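
Because methods like Where return another IEnumerable, you can also chain the calls. For example, assuming the Employee class also has a Salary property (my invention for the example), the average salary of the female employees is still a one-liner:

var avgSalary = Employees.Where(e => e.Gender == 'F').Average(e => e.Salary);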

There are extension methods for many specific things: determining if all or any items in the collection meet a condition; applying an accumulator function over the list; computing average, minimum, maximum, sum; skipping items; taking a specific number of items, etc. But there is no extension method that will allow you to apply a function to all items in the list. That is, there is no IEnumerable.ForEach method.

I do a lot of list processing and my code is littered with calls to the IEnumerable extension methods. It’s very functional-looking code. But when I want to do something that’s not defined by one of the extension methods, I have to drop back into procedural mode. For example, to apply a function to every item in the list, I have to write:

foreach (var item in MyEnumerable)
{
    DoSomething(item);
}

That’s crazy, when I should be able to write:

MyEnumerable.ForEach(item => DoSomething(item));

The lack of ForEach is terribly annoying. What’s worse is that ForEach does exist for arrays and for the generic List class, but in inconsistent forms. With arrays you have to call the static Array.ForEach method:

Array.ForEach(MyArray, item => DoSomething(item));

For generic List collections, it’s an instance method:

MyList.ForEach(item => DoSomething(item));

And other collection types simply don’t have a ForEach method.

It’s simple enough to add my own ForEach extension method:

public static class Extensions
{
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (var item in source)
        {
            action(item);
        }
    }
}
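
With that extension method in scope, ForEach works on any IEnumerable<T>, including the output of the other extension methods. Reusing the hypothetical examples from above:

Employees.Where(e => e.Gender == 'F').ForEach(e => DoSomething(e));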

That’s what I did, and it works just fine.  But it seems such an obvious addition (i.e. everybody would want it) that I can’t for the life of me understand why it wasn’t included in the first place.

I know the guys at Microsoft who designed this stuff aren’t stupid, so I just have to think that they had a good reason (at least in their mind) for not including a ForEach extension method for the generic IEnumerable interface. But try as I might, I can’t imagine what that reason is. If you have any idea, I’d sure like to hear it.

An assumption of competence

My second programming job was with a small commercial bank in Fresno, CA, where I helped maintain the COBOL account processing software.  I was still pretty inexperienced, having only been working in the industry for about 18 months.  My previous job involved maintaining software, also in COBOL, for small banks in western Colorado.

One of the first things my new boss asked me to look at was a program that computed loan totals by category (a two-digit code assigned to each loan).  Federal regulations required that we report the total number and dollar amount of all loans, by category, as well as the number and dollar amount of loans that were 30, 60, and 90 or more days past due.  The problem was that the report program was taking way too long to run.  The bank had recently acquired a bunch of new loans, and the program’s run time had increased sharply—several times more than one would expect from the increase in the number of loans.

Understand, this was a very simple program.  All it had to do was go through the loans sequentially and compute totals in four columns (total, 30 days past, 60 days past, 90+ days past) for each of the 100 categories.  The data structures are very simple.  I don’t remember enough COBOL to write it intelligently, so I’m showing them translated to C#:

struct CountAndTotal
{
    public int Category;
    public int Count;
    public double Total;
}

// Arrays for Total, 30, 60, and 90+ days past due
CountAndTotal[] TotalAllLoans = new CountAndTotal[100];
CountAndTotal[] Total30Past = new CountAndTotal[100];
CountAndTotal[] Total60Past = new CountAndTotal[100];
CountAndTotal[] Total90Past = new CountAndTotal[100];

I’ll admit that I was a little mystified by the Category field in the CountAndTotal structure, but figured it was an artifact from early debugging.

Those definitions and the description of the problem above lead to a simple loop:  for each loan, determine its category and payment status, and add the totals to the proper items in the arrays. The program almost writes itself:

while (!LoanFile.Eof)
{
    LoanRec loan = LoanFile.ReadNext();
    AddToTotals(loan, TotalAllLoans);
    if (loan.PastDue(90))
        AddToTotals(loan, Total90Past);
    else if (loan.PastDue(60))
        AddToTotals(loan, Total60Past);
    else if (loan.PastDue(30))
        AddToTotals(loan, Total30Past);
}

You would expect the AddToTotals function to simply index into the array using the loan’s category code. After all, the category was guaranteed to be in the range 0-99. That just begs for this implementation:

void AddToTotals(LoanRec loan, CountAndTotal[] Totals)
{
    ++Totals[loan.Category].Count;
    Totals[loan.Category].Total += loan.Balance;
}

What I found instead was quite surprising. Rather than indexing directly into the array of categories, the program did a sequential search of the array to see if that category was already there. If it was, the count and total were updated. Otherwise the program made a new entry at the next empty spot in the array. That explained the mysterious Category field and the absurdly long run time. The code is much more complicated:

void AddToTotals(LoanRec loan, CountAndTotal[] Totals)
{
    int i = 0;
    while (i < 100)
    {
        if (Totals[i].Category == loan.Category)
        {
            ++Totals[i].Count;
            Totals[i].Total += loan.Balance;
            break;
        }
        else if (Totals[i].Category == -1)
        {
            // Unused position.  The category wasn't found in the array.
            Totals[i].Category = loan.Category;
            Totals[i].Count = 1;
            Totals[i].Total = loan.Balance;
            break;
        }
        ++i;
    }
}

The difference in run times is enormous! The first implementation accesses the array directly from the loan.Category field. The second has to search the array sequentially—an operation that involves, on average, looking at 50 different items every time.  The first version of the program (the way I thought it should be written) is at least 50 times faster than the second.  In addition, the second required a subsequent sort to put things in the proper order before printing the results.

Being new at the job, I went to my boss, explained what I’d found, and said, “What am I missing?”  His response:  “Why do you think you’re missing something?”

He went on to explain that my analysis was correct, and that the industry (at least back then) was full of programmers who had no business sitting at a terminal.  It was something of a revelation to me, because I had assumed that the people who wrote this stuff really knew what they were doing.  It also taught me to question everything when faced with a problem.  It’s always a good idea to assume competence when you start debugging somebody else’s (or your own) code, but when things stop making sense it’s time to re-evaluate that assumption.

More whittling

After I whittled that knife a couple of weeks ago, I tried to make a small decorative spoon. I made two mistakes on that project: 1) I selected the wrong kind of wood, and 2) I used the wrong knife. I took a branch that I’d cut from the pear tree a few months ago and started whittling on it. I noticed immediately that the pear wood is much harder than the juniper I’d whittled the knife from. I found out later that it’s about four times as hard. No wonder I had trouble with it.

The knife I selected is a cheap pocket knife (I was going to say “utility knife,” but that describes a particular kind of knife) that I’d been carrying around for a few months. Like the Buck 112 “hunting” knife I used previously, this one is just too large for detail work. It works fine for day-to-day box opening and such. As a carving tool it leaves a lot to be desired, in large part because the handle is so thin.

Anyway, here’s a picture of the spoon and the knife. By the time I got the basic shape of the thing roughed out, I was so frustrated that I just wanted to call it done.

I’m not particularly proud of the way the spoon turned out, but I certainly learned a lot making it. The pear is a beautiful wood, but I don’t yet have the skill to work with it. I’ve put a few branches up in the rafters of the garage while I work on my technique.

I bought a small-ish Buck pocket knife and visited the local WoodCraft store to pick up a carving glove, a thumb protector (the spoon cost me two cuts), and a small box of basswood blocks. Then I searched online for a simple project and came across instructions to carve a pinecone tree ornament. It’s a great beginner’s project. I’m sure it took me an absurdly long time to complete the project (I spread it out over about 10 days). I obviously have a lot to learn, but I’m pretty happy with the result:

I had planned to paint them and add some “snow” at the top, but Debra says she’d like to keep them raw. Who am I to argue?

Useful Notepad feature

Somebody pointed out a useful feature of Windows Notepad:  date/time stamp logging.  Imagine you want to keep a diary of sorts in a Notepad file, and have a date/time stamp at each “entry”.  Normally, you’d open the file, press Ctrl+End to get to the end of the file, enter the date and time manually, and then start typing.

You can get Notepad to do that for you, automatically.  Here’s how.

  1. Open a blank document in Notepad and type “.LOG” (without the quotes, in uppercase) as the very first line.
  2. Save the file as diary.txt.
  3. Close Notepad.
  4. Now start Notepad again and open the file that you just saved.

Notepad adds a blank line at the end of the file, enters the current time and date, and positions the cursor at the end of the file so that you can start typing your notes.  I guess it really is a “notepad” program.
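
After a couple of sessions, the file ends up looking something like this (the entries are made up, and the exact date/time format depends on your locale settings):

.LOG
9:14 AM 12/8/2008
First note goes here.

7:42 PM 12/9/2008
Second note goes here.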

Some people laugh, but I find Notepad to be incredibly useful for writing quick design notes and thoughts.  Sure, it’s a primitive tool.  But I don’t need anything fancy at that stage.  I need something that will open quickly and let me enter text with a minimum of fuss.  I’ve not found anything better for that than Notepad.

Interface annoyances

We ran into a rather difficult class design problem recently that reveals a shortcoming in C# and, apparently, the .NET runtime (specifically, the Common Language Infrastructure, or CLI). It’s a pretty common problem, and I’m a little bit surprised it hasn’t been addressed.

As you know, C# doesn’t allow multiple inheritance. It does, however, allow classes to implement interfaces, which is kind of like inheriting behaviors. The big difference is that with interfaces you have to supply the implementation in your class. With inheritance, you get a default implementation along with the interface.

And before you flame me, please understand that I’m well aware of the many differences between multiple inheritance and implementing interfaces, including the many assumptions that come along with multiple inheritance. The fact remains, though, that the general behavior of a class that implements an interface will be very similar to the behavior of a class that inherits the behaviors defined by that interface. The implementations are quite different, of course, and inheritance carries with it some often unacceptable baggage, but from the outside looking in, things look very much the same.

Interfaces are very handy things, but they can get unwieldy. Consider, for example, an interface called ITextOutput that contains 4 methods:

interface ITextOutput
{
    void Write(string s);
    void Write(string fmt, params object[] parms);
    void WriteLine(string s);
    void WriteLine(string fmt, params object[] parms);
}

The idea here is that you want to give classes the ability to output text strings. If a class implements ITextOutput, then clients can call the Write or WriteLine methods, and the supplied string will go to the object’s output device. So, a class called Foo might implement the interface:

class Foo : ITextOutput
{
    public void Write(string s)
    {
        Console.Write(s);
    }

    public void Write(string fmt, params object[] parms)
    {
        Write(string.Format(fmt, parms));
    }

    public void WriteLine(string s)
    {
        Write(s + Environment.NewLine);
    }

    public void WriteLine(string fmt, params object[] parms)
    {
        WriteLine(string.Format(fmt, parms));
    }
}
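
A client can then write text through the interface without caring where it ends up. (This is just illustrative usage; recordCount is a made-up variable.)

ITextOutput log = new Foo();
log.WriteLine("Processed {0} records", recordCount);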

Easy enough, right? Except that every class that implements ITextOutput has to implement all four methods. If you look closely, you’ll see that the last three methods all end up calling the first after formatting their output. In most cases, the only method that will change across different classes that implement this interface will be the first Write method. You might want to change the output device, for example, or include a date and time stamp on the output line.

C# does not provide a good solution to this problem. As I see it, you have the following options:

  1. Implement the methods as shown in each class that implements ITextOutput. This is going to be tedious and fraught with error. Somebody is going to make a mistake in all that boilerplate code and the resulting bug will appear at the worst possible time—quite possibly after the product has shipped.
  2. Structure your class hierarchy so that every object inherits from a common TextOutputter class that implements the interface. This is a very good solution if you can do it. For those classes that inherit from some other base class, you can implement the interface as shown above. I have difficulty with this solution because it’s saying, in effect, “Foo IS-A TextOutputter that also does other stuff.” In reality what we really want to say is, “Foo IS-A object (or some other base class) that implements the ITextOutput functionality.” It might sound like a fine distinction, but it matters. A lot. Especially when refactoring code.
  3. Forget about inheritance and interfaces and make Foo contain a member that implements the interface. Something like: class Foo { public ITextOutput Outputter = new TextOutputter(); } This will work, but it’s annoying to clients. Rather than calling Foo.Write, for example, they have to call Foo.Outputter.Write. And class designers are free to change the name of the Outputter member to anything. The result is that clients can’t tell by looking at the class declaration if it implements the ITextOutput interface. Instead, they have to go looking for a member variable (or property) that implements it.

Any way you look at it, it’s messy. As a client, I’d expect class designers to bite the bullet, go with the first option, and test thoroughly. In truth, I think that’s the only reasonable option, painful as it is. As a designer, I’d grumble about the need for all that extra typing, but I’d do it. I’d be embarrassed to release a class built on either of the other two approaches, because as a user I’d be annoyed by them. But I sure wish there were another way to do it.
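
One way I’ve seen to split the difference between options 1 and 3 is to keep the contained member but still declare the interface on the class, with every interface method reduced to a one-line forwarder. This is only a sketch (TextOutputter here is a hypothetical class that implements ITextOutput the way Foo did above); the boilerplate doesn’t go away, but at least it’s mechanical and contains no real logic:

class Foo : ITextOutput
{
    // All the real work happens in the contained outputter; Foo just forwards.
    private readonly ITextOutput outputter = new TextOutputter();

    public void Write(string s) { outputter.Write(s); }
    public void Write(string fmt, params object[] parms) { outputter.Write(fmt, parms); }
    public void WriteLine(string s) { outputter.WriteLine(s); }
    public void WriteLine(string fmt, params object[] parms) { outputter.WriteLine(fmt, parms); }
}

Clients still see a class that declares ITextOutput, and changing the output device means swapping the contained object rather than editing four methods.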

Delphi solved this problem by using what’s called implementation by delegation. The technique involves creating a member that implements the interface (similar to the third option shown above), and delegating calls to interface methods to that object. In C#, if such a feature existed, the syntax might look something like this:

class Foo: ITextOutput
{
    public ITextOutput Outputter =
        new TextOutputter() implements ITextOutput;
}

Clients could then call the Write method on an object of type Foo, just as they would with the first option. The runtime (or the compiler, maybe) would then delegate such interface calls to the Outputter member. We have the best of both worlds: real interfaces, and we don’t have to repeatedly type all that boilerplate code.

I’m not the first one to run into this problem or to suggest the solution. Steve Teixeira mentioned it in his blog over three years ago, and linked to this blog entry from 2003. Steve, by the way, is the one who came up with the idea for Delphi. He says that it’s not currently possible to do such a thing in .NET because it can’t be made verifiably type-safe. I don’t understand why not, but I’ll defer to his judgement here.

This type of thing is trivial to implement in languages that support multiple inheritance. But I don’t think I’m willing to accept the problems with multiple inheritance in order to get this one benefit. It’s a moot point anyway, as it’s quite unlikely that .NET will support multiple inheritance any time soon.

I’d sure be interested to find out how others handle this situation in C# or other .NET languages. Drop me a line and let me know.

Whittling a knife

Debra and I spent last weekend with our friends at their property near Ranger, TX.  We spent two days just kicking back and relaxing:  reading, talking, and playing with the camp fire.  I took my sharpening stone and three different knives that I’d been neglecting for too long.

After putting an edge on my old Buck knife, I grabbed a piece of firewood from the pile and started whittling on it with no particular plan to make anything.  At some point the idea of making a wooden knife struck me, and I ended up spending several hours on it.

Whittled knife and the knife that whittled it

The project started out as a piece of juniper (what they call cedar around here) about 18 inches long and 1-1/2 inches in diameter.  The finished piece is right at 12 inches long and 1-1/4 inches in diameter.  The blade is 3/4 inch wide.

I toyed with the idea of spending more time on it—sanding it down and making the blade thinner at the edge so I could use it as a letter opener—but in the end decided against it.  I like leaving it in this rough form.

It’s been 30 years since I picked up a piece of wood and started whittling on it.  I forgot how relaxing it can be.  I for sure won’t wait another 30 years before I try making something else.  Think I’ll use a different knife, though.  The Buck is a handy tool, but it’s too large for detail work.

That knife, by the way, is sharp.  My old Scoutmaster would be proud.  He’d also be appalled at my clumsiness.  I ended up sinking that blade about 1/4 inch into my thumb on Sunday.

Optimizing the wrong thing

Today I was building a custom hash table implementation and needed a function that, given a number X, would find the next prime number that is equal to or greater than X.  Since X could be pretty large—on the order of one trillion—I knew that, done in a naive way, it could take a long time (in computer terms) to compute the next prime number.  So I started looking for a smarter way to do it.

The basic algorithm is pretty easy.  Given a number, X, the following will find the next prime number:

V = X
if (V % 2 == 0)
    V += 1
while (!IsPrime(V))
{
    V += 2
}
// at this point, V is the next prime number >= X

The simplest way to determine if a number is prime is to see if it’s evenly divisible by any number from 2 up to and including the square root of the number in question.  That is, to determine if 101 is prime, you try dividing by 2, 3, 4, 5, 6, 7, 8, 9, and 10. If any division results in a remainder of 0, then you know that the number is not prime and you can stop testing.  Only if all the tests fail do you know that the number is prime. But the square root of one trillion is a million, and I didn’t relish the idea of doing millions of division operations to find the next prime number.
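
In C#, that naive test is only a few lines. Here’s a sketch of what I mean; it checks divisibility by 2 and then by every odd number up to and including the square root:

static bool IsPrime(long n)
{
    if (n < 2)
        return false;
    if (n % 2 == 0)
        return n == 2;  // 2 is the only even prime
    for (long d = 3; d * d <= n; d += 2)
    {
        if (n % d == 0)
            return false;
    }
    return true;
}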

A much more efficient method to determine whether a number is prime is to divide it by all the prime numbers up to the square root.  It’s a simple optimization, right?  If the number is not evenly divisible by 2, then it certainly won’t be divisible by 4, 8, 296, or any other even number.  And if it’s not divisible by 5, it won’t be divisible by 10, 15, 20, or 2,465.

That insight can greatly decrease the amount of work you have to do in determining whether a number is prime.  After all, there are only 78,498 prime numbers between 0 and 1,000,000.  So the worst case goes from 500,000 division operations to fewer than 80,000.  The only catch is that you need to store all those prime numbers.  That costs some memory.  Naive encoding (four bytes per number) would take about 300K of RAM.  But you can use a table of bits and do it in about 125K.

I was halfway to building the table of primes when I decided to see just how long it takes to find the next prime using the naive method.  I quickly coded up the simplest version and got my answer.  On my machine using the naive method, it takes on average 30 milliseconds (3/100 second) to find the next prime number if the starting number is between 1 trillion and 2 trillion.  Granted, that’s a long time in computer terms.  But in context?

The reason I need to compute the next prime number is that in my hash table implementation the hash table size needs to be prime.  So if somebody asks for a table of 1 million items, I’ll give them 1,000,003.  And when I extend the table, I need to ensure that the new size is prime.  Resizing the table requires that I make a new array and then re-hash all the existing items.  That takes significant time when the table is large.  Fortunately, resizing is a relatively rare operation.

The point is that the function is called so infrequently, and the code that calls it takes so much time, that whatever cycles I spend computing the next prime won’t be noticed.  So the “slow,” simple, naive method wins.

I used to rant a lot about programmers spending their time optimizing the wrong thing, pursuing local efficiency even when it won’t affect the overall program running time. It’s embarrassing when I find myself falling into that trap.

Copying large files on Windows

Warning:  copying very large files (larger than available memory) on Windows will bring your computer to a screeching halt.

Let’s say you have a 60 gigabyte file on Computer A that you wish to copy to Computer B.  Both Computer A and Computer B have 16 gigabytes of memory.  Assuming that you have the network and file sharing permissions set correctly, you can issue this command on Computer B:

copy /v \\ComputerA\Data\Filename.bin C:\Data\Filename.bin

As you would expect, that command reaches across the network and begins copying the file from Computer A to the local drive on Computer B.

What you don’t expect is for the command to bring Computer A and possibly Computer B to a screeching halt.  It takes a while, but after 20 or 30 gigabytes of the file have been copied, Computer A stops responding.  It doesn’t gradually get slower.  No, at some point it just stops responding to keyboard and mouse input.  Every program starts running as though you’re emulating a Pentium on a 2 MHz 6502, using a cassette tape as virtual memory.

Why does this happen?  I’m so glad you asked.  It happens because Windows is caching the reads.  It’s reading ahead, copying data from the disk into memory as fast as it can, and then dribbling it out across the network as needed.  When the cache has consumed all unused memory, it starts chewing on memory that’s used by other programs, somehow forcing the operating system to page executing code and active data out to virtual memory in favor of the cache.  Then, the system starts thrashing:  swapping things in and out of virtual memory.

It’s a well-known problem with Windows.  As I understand it, it comes from the way the COPY and XCOPY commands (as well as the copy operation in Windows Explorer) are implemented.  Those commands use the CopyFile or CopyFileEx API functions, which “take advantage” of disk caching.  The suggested workaround is to use a program that creates an empty file and then calls the ReadFile and WriteFile functions to read and write smaller-sized blocks of the file.

That’s idiotic.  There may be very good reasons to use CopyFileEx in favor of ReadFile/WriteFile, but whatever advantages that function has are completely negated if using it causes Windows to cache stupidly and prevent other programs from running. It seems to me that either CopyFileEx should be made a little smarter about caching, or COPY, XCOPY and whatever other parts of Windows use it should be rewritten. There is no excuse for a file copy to consume all memory in the system.
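
For what it’s worth, a managed version of that workaround looks something like the sketch below. The SequentialScan and WriteThrough flags are my own additions (they’re hints to Windows, not guarantees), and I haven’t measured whether this actually keeps the cache in check:

using System.IO;

static class BigFileCopier
{
    // Copy the file in fixed-size blocks instead of handing the whole job
    // to CopyFileEx. SequentialScan tells the cache manager we won't re-read
    // the data; WriteThrough pushes writes out to disk rather than letting
    // them pile up in the cache.
    public static void ChunkedCopy(string source, string dest)
    {
        const int BlockSize = 1024 * 1024;  // 1 MB per read/write
        byte[] buffer = new byte[BlockSize];

        using (var src = new FileStream(source, FileMode.Open, FileAccess.Read,
                   FileShare.Read, BlockSize, FileOptions.SequentialScan))
        using (var dst = new FileStream(dest, FileMode.Create, FileAccess.Write,
                   FileShare.None, BlockSize, FileOptions.WriteThrough))
        {
            int bytesRead;
            while ((bytesRead = src.Read(buffer, 0, BlockSize)) > 0)
            {
                dst.Write(buffer, 0, bytesRead);
            }
        }
    }
}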

I find it interesting that the TechNet article I linked above recommends using a different program (ESEUTIL, which apparently is part of Exchange) to copy large files.

This problem has been known for a very long time. Can anybody give me a good reason why it hasn’t been addressed? Is there some benefit to have the system commands implemented in this way?

Update, October 16
It might be that Microsoft doesn’t consider this a high priority.  In my opinion it should be given the highest possible priority because it enables what is in effect a denial of service attack.  Copying a large file across the network will cause the source machine to become unresponsive.  As bad as that is for desktop machines, it’s much worse for servers.  Imagine finding that your Web site is unresponsive because you decided to copy a large transaction file from the server.

Hardware problems

I’ve mentioned before that we use removable drives to transfer data between the data center and the office. Some of those files are very large—50 gigabytes or larger. The other day we discovered an error in one of the files that we had here at the office. The original copy at the data center was okay. Somewhere between when it was created at the data center and when we read it here, an error crept in. There is plenty of room for corruption. The file is copied to the removable, then copied from the removable, transferred across the network, and stored on the repository machine.

The quick solution to that problem is to copy with verify. That takes a little longer, but it should at least let us know if a bit gets flipped.

Saturday we ran into another error when copying a file from the removable drive to its destination on the network:

F: is the removable drive. The machine it was connected to disappeared from the network. I’m still trying to decipher that error message. I can’t decide if we got a disk error or if there was a network error. Did the disk error cause the network error? Or perhaps Windows considers a USB storage device to be a network drive. We removed the drive from that machine, connected it directly to the repository machine, and the copy went just fine. The file checked out okay, leaving me to think that the first machine is flaky.

About a year ago we purchased a Netgear ProSafe 16-port Gigabit Switch (model GS116 v1). It was a reliable performer, although it did get a little warm to the touch. We ran it pretty hard and it never had a glitch. We bought another about 6 months ago. Then last month, the first one flaked out and started running at 100 Mbps. Not good when you’re trying to copy multi-gigabyte files. This morning, the other one gave up the ghost completely and wouldn’t pass traffic at all.

I suspect that excess heat caused both switch failures. The units were operating in a normal office environment where the ambient temperature is between 75 and 80 degrees. There was no special cooling and we ran the units pretty hard what with the Web crawler and all. As I said, the switch did get very warm to the touch. In a normal office configuration where the switch doesn’t get a lot of traffic, it probably will hold up fine. But I would not recommend this switch for high duty cycles unless you have special cooling for it.