Memory leak in ConcurrentQueue

I learned today that there is a memory leak in the .NET 4.0 ConcurrentQueue class. The leak could explain some trouble I was having in December with a program running out of memory. I didn’t know about this bug at the time, but reducing the sizes of several of my queues seems to have made the problem go away.

I know some of you are thinking, “A memory leak in .NET?  I thought the garbage collector was supposed to prevent that!”  The garbage collector collects garbage–memory that’s no longer referenced.  It can’t collect something that’s still being used.  Let me explain.

The ConcurrentQueue class uses an array as its backing store.  When an item is added to the queue, a reference to the item is placed in the array.  As long as an item is in the queue, there is an active reference to it–some part of the program is depending on that item to be good.  In garbage collector parlance, the item is “rooted,” meaning that the garbage collector can’t collect it.

When an item is removed from the queue, that item’s slot in the array is supposed to be cleared–set to null. Since the item was removed from the queue, it makes no sense to keep a reference to the item around.  And as long as the queue keeps the reference, the garbage collector can’t collect the object.

Problem is, the code that removes items from the queue doesn’t clear the item’s slot in the array.  The result is that stale object references remain in the queue’s backing array until one of the following happens:

  1. An item added to the queue overwrites that slot.
  2. There are no more references to the queue (i.e. it goes out of scope).
  3. The array is resized. This happens automatically as the number of items in the queue grows and shrinks.

There might be other ways, but those are all that I can think of at the moment.  The point is that items “live” in the queue longer than they absolutely have to, which is a potentially serious memory leak.

I say “potentially serious” because it’s not like the queue grows without bound. At least, it shouldn’t. If your program allows for unbounded queue growth, then you have a more serious problem than this memory leak.  That said, with the existence of this bug you have to assume that your queue will always have worst-case memory usage.  That is, for the purpose of computing memory consumption, you have to assume that the queue is always full, even when you know that all items have been removed.

This can be a real problem if, for example, you use a queue to hold input items and a separate collection to hold items that have been processed from the queue.  Imagine a loop that does this:

QueuedThing queued = GetNextItemFromQueue();
ProcessedThing processed = ProcessThing(queued);
AddProcessedItemToList(processed);

Your expectation is that, within the limits of the garbage collection interval, the amount of memory used is:

(sizeof(QueuedThing) * queue.Count) + (sizeof(ProcessedThing) * list.Count)

But with this bug, you have to design for the worst case memory usage, which is:

(sizeof(QueuedThing) * queue.Capacity) + (sizeof(ProcessedThing) * list.Count)

So if QueuedThing and ProcessedThing are the same size, then your worst case memory usage is double what it should be.

Microsoft says that the fix for this will be in the next version of .NET. Considering that .NET releases are typically every two years or so, that means at least another year before this is fixed.  Until then, the workaround is to be more careful with the way you use ConcurrentQueue (and BlockingCollection, which uses a ConcurrentQueue as its default backing store). In particular, you should be careful to limit the size of your queue and limit the scope in which the queue is active.
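To make that concrete, here’s a minimal sketch of the sort of thing I mean, using a bounded BlockingCollection in a simple producer/consumer arrangement (the string items and the capacity of 100 are just placeholders). The bound caps how many stale references the backing array can hold at any one time, and disposing the collection when you’re done lets the whole array become garbage:

using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

class BoundedQueueExample
{
    static void Main()
    {
        // BlockingCollection uses a ConcurrentQueue as its default backing store.
        // Bounding it at 100 items means at most about 100 stale references
        // can linger in the backing array at any one time.
        using (var queue = new BlockingCollection<string>(100))
        {
            Task consumer = Task.Factory.StartNew(() =>
            {
                foreach (string item in queue.GetConsumingEnumerable())
                {
                    // process the item here
                }
            });

            for (int i = 0; i < 1000; i++)
            {
                queue.Add("work item " + i);  // blocks while the queue is full
            }

            queue.CompleteAdding();
            consumer.Wait();
        }
        // The collection has been disposed and gone out of scope here, so its
        // backing array (stale references and all) is eligible for collection.
    }
}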

Coffee scoop and hillbilly

A few new carvings recently. Debra wanted a new coffee scoop, so I made her two of them. Here’s one. The other looks just like it.

The scoop is carved from basswood, and finished with something called “butcher block conditioner”–a mix of beeswax and food-grade mineral oil. I figured there wasn’t much sense in using an exotic wood for the scoop, as it’s going to get blackened from the coffee. The bowl holds about a tablespoon. The size comparison object is one of the new Presidential dollar coins.

This little hillbilly (about 3½ inches tall) is carved from a maple scrap.  Very hard wood.  I think I’ll stick to softer woods for my caricatures in the future and save the harder woods for things like dolphins and other forms that have fewer fine details.

Comparing ranges is harder than it looks

I’m developing a generic Range class in C# so that I can create comparison ranges.  The idea is that I can replace code like this:

int MinValue = 22;
int MaxValue = 42;
...
if (val >= MinValue && val < MaxValue)
{
}

with this:

Range<int> MyRange = new BoundedRange<int>(
    22, RangeBoundType.Inclusive,
    42, RangeBoundType.Exclusive);
...
if (MyRange.Contains(val))
{
}

In isolation it doesn’t look like much of a win, but if you’re doing that range comparison in multiple places or working with multiple ranges, some of which can have an open-ended upper or lower bound, or bounds that can be inclusive or exclusive, working with a Range type can save a lot of trouble.

Before we go on, let me introduce a little bit of notation.

First, the characters ‘[’ and ‘]’ mean “inclusive”, and the characters ‘(’ and ‘)’ mean “exclusive.”  So, assuming we’re working with integral values, the range [1,30) means that the numbers 1 through 29 are in the range.  Similarly, (1,30] means that the numbers 2 through 30 are in the range.

To specify “unboundedness,” I use the character ‘*’.  So the range [*,255] means all numbers less than or equal to 255.  And, of course, the range [*,*] encompasses the entire range of values for a particular type.  By the way, inclusive and exclusive are irrelevant when unboundedness is specified.  That is, [*,255] and (*,255] are identical ranges.

The Range type also allows me to do things like compare ranges to see if they overlap, if one contains the other, etc. And therein lies a problem. Consider the two byte ranges [0,*] and [*,255].

It should be clear that, since the range of a byte is 0 through 255 (inclusive), the two ranges are identical. The first range has a lower bound of 0, inclusive, and is unbounded on the upper end.  The second range is unbounded on the lower end and has an upper bound of 255, inclusive.  As programmers, we can tell by inspection that the two ranges are identical.

Unfortunately, there’s no way to write generic code that can determine that the ranges are equivalent.  Although the primitive types available in C# all have defined maximum and minimum values (Int32.MaxValue and Int32.MinValue, for example), not all types do.   And since I might want to have ranges of strings or other types that don’t have defined bounds, there’s no general way to tell the difference between a range that has no defined upper bound and a range whose upper bound is the maximum possible value for that type.  If I wanted that ability, the only way would be to pass maximum and minimum possible values for each type to the Range constructor–something that I’m not willing to do and might not even be possible.  What’s the maximum value for a string?

The inability to determine that an unbounded range is equivalent to a bounded range makes for some interesting problems. For example, consider the two byte ranges [1,*] and [0,255].  It’s obvious to programmers that the second range contains the first range. But how would you write a function to determine that?

public bool Contains(Range<T> other)
{
    // returns true if the current range contains the other range
}

Since there’s no way to convert that unbounded value into a number, the code can only assume that the upper bound of the first range is larger than the upper bound of the second range. Therefore, Contains will return false.
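Here is roughly what that logic looks like, as a simplified sketch rather than my actual implementation. It ignores inclusive versus exclusive bounds, and it assumes the range exposes hypothetical HasLowerBound/HasUpperBound flags along with LowerBound and UpperBound values:

public bool Contains(Range<T> other)
{
    // Comparer<T> comes from System.Collections.Generic.
    Comparer<T> cmp = Comparer<T>.Default;

    // An unbounded lower end contains everything below it; otherwise the
    // other range's lower bound must be at or above ours.
    bool lowerOk = !HasLowerBound ||
        (other.HasLowerBound && cmp.Compare(other.LowerBound, LowerBound) >= 0);

    // An unbounded upper end on the other range can never be shown to fit
    // inside a bounded upper end, because we don't know the type's maximum.
    bool upperOk = !HasUpperBound ||
        (other.HasUpperBound && cmp.Compare(other.UpperBound, UpperBound) <= 0);

    return lowerOk && upperOk;
}

With that logic, [0,255].Contains([1,*]) comes back false for byte, even though every value in [1,*] really is in [0,255].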

Whether that’s an error is a matter of opinion.  Logically, having Contains return false in this case is the correct thing to do because “unbounded” is larger than any number.  But it seems like an error in the context of types like byte that have limited ranges, because there’s really no such thing as an unbounded byte.  Rather, byte has a maximum value of 255 and one would expect the code to know that.

As I said, it’d be possible to extend the Range class so that it knows the minimum and maximum values for the primitive types, and give clients the ability to add that information for their own types if they want.  That way, an unbounded upper value for an integer would get  Int32.MaxValue.   The code would then “do the right thing” for the built-in types, and clients could define minimum and maximum values for their own types if they wanted to.
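One possible shape for that extension is a registry of known type limits that Range could consult when it needs to resolve an unbounded end. Everything here is hypothetical, and minimum values would work the same way:

using System;
using System.Collections.Generic;

public static class TypeBounds
{
    private static readonly Dictionary<Type, object> MaxValues = new Dictionary<Type, object>();

    static TypeBounds()
    {
        // Built-in types come pre-registered...
        Register(byte.MaxValue);
        Register(int.MaxValue);
        Register(long.MaxValue);
        // ...and clients can call Register for their own types.
    }

    public static void Register<T>(T max)
    {
        MaxValues[typeof(T)] = max;
    }

    public static bool TryGetMax<T>(out T max)
    {
        object value;
        if (MaxValues.TryGetValue(typeof(T), out value))
        {
            max = (T)value;
            return true;
        }
        max = default(T);
        return false;
    }
}

With something like that in place, Contains could treat an unbounded upper end on a byte range as 255, while a string range would simply stay unbounded because string never gets registered.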

But I wonder if it’s worth going to all that trouble.  Is it more sensible to explain the rules to programmers and expect them to define specific bounds (i.e. not use unbounded) if they think they’ll be working with ranges (as opposed to comparing values against ranges) of primitive types?

It’s all a matter of perception

The story is told of a man who becomes convinced that he’s dead.  At first, his family tries logic:  “Look, you’re walking and breathing and talking.  You can’t possibly be dead!”  Failing that, they refer him to a psychiatrist, who tries the same line of reasoning, again to no avail.  The man is eventually committed to a mental institution, still firmly convinced that he is dead, and daily visits with the doctors have no effect on changing his mind.

After some time, a new psychiatrist is assigned his case.  The new doctor has a new idea, and walks his patient through the medical texts to convince the man of one fact:  dead men don’t bleed.  After weeks of poring over the texts and other relevant information, the man concedes the point:  dead men do not bleed.

The doctor then takes a pin and pricks the man’s finger.  As you would expect, a drop of blood begins to well up in the tip of the patient’s finger.  Looking at it, astounded, the man exclaims, “Hey, Doc!  Dead men do bleed!”

How often do you run into people who, in spite of all evidence to the contrary, continue clinging to their own preconceived notions in much the same way as the man who was convinced that he was dead?

Better yet, have you ever found yourself holding tightly to a particular belief long after you have seen sufficient evidence to prove that you’re dead wrong?

The ability to re-examine and modify (or discard) your beliefs in the face of contrary evidence, and to admit that you were wrong, is perhaps the most important mark of intellectual maturity.

Update, June 2024:

I had some difficulty with my web hosting and had to rebuild the blog. In the process I had to decide what to do with comments that people made on old posts. I had no way to enter and attribute them correctly, and most I didn’t deem sufficiently insightful to duplicate. However …

Michael Covington noted on this entry:

This is actually a rather tricky and difficult area of epistemology. As Quine pointed out (and I call it Quine’s Law), evidence can compel you to revise your beliefs as a set, but it cannot compel you to revise a *particular* belief. You can always revise some other belief(s) instead. It is very hard to pin down good criteria for what is reasonable. Logic itself does not tell you how to strike the right balance.

Michael also provided a link to his own blog, where he describes a possible approach.

You can (or could, in June of 2024) see those comments on the Internet Archive backup of my blog post:

Free enterprise?

Long ago, I was a staunch supporter of laissez faire capitalism, believing that private enterprise was much more efficient and effective at self regulation than any kind of government intervention.  But my support of laissez faire was based on two assumptions that proved wrong:

  1. Businesses would operate morally.  That is, they would give value for value, and not take advantage of customers, employees, or suppliers.
  2. People could choose who to do business with, and had the means to find out about companies’ business practices.

All too often, one or both of those assumptions turned out to be wrong.  I found that businesses would often operate dishonestly and would make it difficult for others to find out about it.  In addition, at the time I was re-thinking my position (late 80s and early 90s), significant natural barriers existed which prevented the average person from finding out about a company’s business practices.

In theory, government’s role in laissez-faire capitalism is to serve as an arbiter–to supply a system of courts where disputes can be resolved.  But in practice, the game is rigged in favor of the party with the most money.  Rarely can an individual emerge victorious from a dispute with an unscrupulous corporation.

After a lot of internal turmoil, I came to the uncomfortable conclusion that some bit of government regulation is required.  I don’t like that conclusion, because I have a very strong (and I think well-placed) distrust of government.  But when the kids won’t play nice together, somebody has to be the daddy and enforce some civility.  The question, then, becomes “How much government regulation is best?”  My answer for the last 20 years has been, “as little as possible.”

I believed that minimal government regulation would keep companies in line: forcing them to operate more transparently, and preventing the most egregious abuses.  It’s like putting a simple lock on a door to keep honest people honest.  My belief in limited government regulation was based on something that I thought was universally true:  Individuals and corporations will operate in their own self interest, and enlightened self interest is a much better regulator than any government.

I still believe the second part to be true, but the assumption that people will act in their own self interest turned out to be wrong.  Shockingly so.

Some people will lay all of the blame for the credit crisis and the current economic situation on government.  Extremists on one side will say that laws such as the Community Reinvestment Act and its ilk caused the problem.  The other side holds that government didn’t do enough to pump liquidity into the market, or that it didn’t regulate enough.  I’ve seen that debate go back and forth like a tennis ball.

The problem isn’t nearly that simple.  There’s certainly plenty of blame to be laid on government’s doorstep.  Encouraging (one might say requiring) banks to make high-risk loans was a bad idea that was compounded by failing to exercise any oversight.  Failing to issue any kind of warning or institute any kind of control over high-risk investment vehicles was another huge mistake.

The majority of the blame for the current situation rests, not on government’s shoulders, but on the shoulders of the individuals who made a lot of really stupid mistakes for a very long time.  No, that’s being too charitable.  They didn’t make mistakes.  They consciously chose short-term gain over long-term consequences, most often with the certain knowledge that at some point they would have to pay.  The best recent example of that is Bernard Madoff Investment Securities, although one only has to look at any bank that’s failed or taken government bailout money over the last year to see that the problem was widespread.

Plenty of people warned about the dangers inherent in the kind of “investing” that had become popular over the last few years.  It’s inconceivable to me that boards of directors and company CEOs didn’t know that they couldn’t keep it up forever.  And yet they kept trying!  Not only that, but they actively discouraged their employees from warning about the dangers.  I’ve seen several reports of employees being fired because they cautioned against participating in the madness.

Companies and individuals did not act in their own self interest.  They acted like a mob in a frenzy, trying to get as much as they could as quickly as possible, damn the long-term consequences.  I can understand individual employees acting in that manner.  I cannot understand it being condoned and even encouraged by CEOs and boards of directors.  It has shaken my belief in free enterprise.

Understand, banks and financial institutions are not the only ones who acted so stupidly.  Any mortgage broker who made a “stated income loan” or some such, any builder who over-extended himself in the hopes he could make a quick buck before sanity was restored, and anybody else who knowingly acted against his own long-term self interest is equally to blame.  I’ll lump in the Big Three auto makers there, as well.  The financial crisis didn’t wipe them out.  They were doing a fine job of wiping themselves out over the past 10 years or so.

Free enterprise assumes that people will act in their own self interest.  The idea is that what’s good for a business long term is also good for its customers and employees.  But if we can’t count on businesses to operate in their own self interest, then what can we do?  Government regulation can help in some ways, mostly to prevent businesses from defrauding their customers or taking advantage of their employees, but it can’t force businesses to make a profit.  On the contrary:  whether intended or not, much government regulation seems to be aimed at preventing profits of any kind.  In any case, we know that complete regulation is a bad idea that’s been tried and failed many times.

I’ve heard the argument that we haven’t really tried free enterprise:  that any government intervention in business destroys the system such that we can’t say whether or not the system would work.  Do the laissez faire proponents expect us to believe that companies will all of a sudden start acting morally if all regulations are eliminated?  There is no evidence that such a thing would happen, and plenty of evidence to the contrary.

That said, I continue to be highly skeptical of government regulation, and I’m strongly opposed to government control of or even intervention in economic matters.  But events of the last few years have left me almost equally skeptical of private enterprise, particularly large corporations whose boards of directors appear to be more concerned with today’s share price than with running a profitable business.

How do we ensure ethical business practices without stifling the free and fair exchange of goods and services?  Can it be done through private regulation (i.e. market forces)?  If not, is there a “sweet spot” of government regulation that can do the same unobtrusively and efficiently?

Resetting a Nikon Coolpix 4600

My Nikon Coolpix 4600 digital camera stopped working the other day.  It just wouldn’t turn on.  When I hit the power button, the LED on the top would start blinking.  For a while.  The camera must have been doing something, though, because it’d drain a pair of batteries in just a few minutes.

An online search revealed a number of people who had the same problem.  The “solutions” offered usually fell into one of two categories:  send the camera off to be repaired (a hard reset and a firmware upgrade) at a cost of $30 to $100, or pitch the camera and buy a new one.

I finally stumbled onto the answer:  remove the memory card, put in new batteries, and hold down the “Image Review” button while simultaneously pressing the “On” button.  That reset whatever weird mode the camera was in, and now it’s working just fine.  Sure beats either of the recommended solutions.

Adventures in mass storage

We’ve been using a number of different computers as file servers here at the office, but we’re to the point now that we really need some kind of centralized data storage.  It’s one thing to have a single machine storing a few hundred gigabytes of data.  It’s something else entirely to scatter multiple terabytes across four or five machines, and then struggle to remember what’s where.

Last week we picked up two network attached storage (NAS) boxes:  a Thecus N5200, and a Thecus N7700.  The 5200 supports 5 drives and will be used primarily for offsite backup.  The 7700 supports 7 drives and will be our primary (only, hopefully) file server.

Setting these things up turned out to be quite an experience.  Not because of any problem with the Thecus boxes.  No, those things are wonderful, with very good documentation and a nice browser-based administration interface.  We had problems with the drives we bought to put in our fancy new RAID arrays.

Seagate recently released their Barracuda 7200.11, 1.5 terabyte drive.  We managed to get a great deal on the drives (about $110 each), and we picked up enough to populate the NAS boxes, plus a few high-powered machines here.

It turns out that early versions of the drive’s firmware have a bug that causes the drive to freeze and time out for minutes at a time, which in turn causes RAID controllers to think that the drive has failed.  The results aren’t pretty.  Fortunately, there’s a firmware upgrade available.  I downloaded mine from the NewEgg page for the drive.  Look on the right side about halfway down.  (This wasn’t a surprise.  We knew about the problem and about the firmware upgrade before we bought the drives.)

Applying the firmware upgrade turned into quite an experience.  You see, in order to apply the upgrade you need a DOS prompt.  Not a Windows command line prompt.  Once you manage to get a machine set up and booting FreeDOS from diskette or CD (you can’t boot from the hard drive because the firmware upgrade program wants to see only one hard drive), you can run the firmware upgrade.  It takes less than two minutes to boot the machine and apply the update.  Then the drive is ready to go, right?

Silly me.  I upgraded the firmware on five drives, put them in the N5200, and started the thing up.  Surprisingly, the N5200 reported that the drives were 500 GB, not the 1.5 TB that I thought I had.  But the label on the drive says 1.5 TB.  Whatever could be going on?

It turns out that one of the many things you can do in the drive’s firmware is set the size.  Want to turn your terabyte drive into a 32 gigabyte drive?  No problem!  A set of utilities from Seagate called SeaTools (download the ISO and burn to CD, then boot from the CD) includes diagnostics and an interface for setting the drive’s capacity.  One option is “Set capacity to max native”.  For me, SeaTools reports that setting the drive’s capacity failed, and adds “be sure that drive has been power cycled.”  When I turn the machine off and back on, the drive reports 1500.301 gigabytes.  There’s my 1.5 terabyte drive.

After upgrading the firmware on all drives and using SeaTools to set their capacities, I finally managed to get the RAID arrays set up.  The N5200 is running RAID-5, giving us about 5.4 terabytes for our offsite backup.  The N7700 is running RAID-6, giving us about 6.5 terabytes of live data in a single place.  That should hold us for a while.
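For the curious, the usable-capacity arithmetic works out roughly like this sketch (RAID-5 gives up one drive to parity and RAID-6 gives up two; the drives are sold in decimal terabytes while the boxes report binary ones, and formatting overhead shaves off a bit more):

double drive = 1.5e12;                 // a "1.5 TB" drive, in bytes
double tib   = Math.Pow(2, 40);        // one binary terabyte, in bytes

double raid5 = (5 - 1) * drive / tib;  // five drives, one for parity  -> ~5.5 before overhead
double raid6 = (7 - 2) * drive / tib;  // seven drives, two for parity -> ~6.8 before overhead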

A couple of notes on the Thecus boxes:

  • Initial setup of the Thecus is a little inconvenient.  The default IP address is 192.168.1.100, so I had to cobble together a network from an old switch and hook my laptop to it.  Once I changed the IP address to fit on our subnet (10.77.76.xxx), setup went quickly.  There might be a way to change the IP address from the front panel.  If so, that would probably be easier than throwing a network together.
  • The N5200 took almost 24 hours to format and create the RAID-5 array with five 1.5 terabyte drives.  The N7700 took about 8 hours to format and create the RAID-6 array with seven 1.5 terabyte drives.  I suppose this is just the result of better hardware and firmware.
  • There must be some magic incantation to configuring the date and time settings.  If I set the date and time, and tell it not to update with an NTP server, everything works just fine.  But if I enable NTP update (manual or automatic), then the time is totally screwed up.  One box was 11.5 hours slow, and the other was a few hours fast.  (As an aside, I have another piece of equipment that insists on reporting the time in UTC, even though I’ve set the time and told it that I’m in the US Central time zone.  I’m beginning to believe that *nix-based servers don’t like me.)
  • The Thecus boxes do way more than just serve files.  We probably won’t use all those features, but others might.  I especially like the support for USB printers, and the built-in FTP server.  With DHCP enabled and machines connected to a switch off the LAN port, one of these things is a  single-box subnet.  I don’t know what kind of traffic will pass from the WAN to the LAN ports on these things, but if it’s fully blocked it’d make an effective home router to connect to a cable modem.

Anyway, we’re in the middle of copying data and retiring or re-tasking some of our old file servers.  This is going to take some time.  A gigabit network is quick until you start copying multiple terabytes. . .

Memory upgrades

It’s been an interesting few weeks here.  We’ve been collecting data much faster than we anticipated, and we’ve had to upgrade hardware.  One thing we’ve had to do is bring several of our servers from 16 gigabytes of RAM to 32 gigabytes.

Memory is surprisingly inexpensive.  You can buy four gigabytes of RAM (2 DIMMs of 2 gigabytes each) for $55.  That’s fine for maxing out your 32-bit machine, or you can bring a typical 64-bit machine to eight gigabytes with those parts for only $110.

Most server RAM is more expensive–about double what desktop memory costs.  In addition, most servers only have eight memory slots, making it difficult or hideously expensive to go beyond 16 gigabytes.  The reason has to do with the way that memory controllers access the memory.  Controllers (and the BIOS, it seems) have to know the layout of chips on the DIMM, and most machines are set up to use single-rank or dual-rank RAM.  A 2-gigabyte DIMM that uses single-rank RAM will have eight (or nine, if ECC) 2-gigabit chips on it.  A dual-rank DIMM will have 16 (18) 1-gigabit chips.  Dual-rank will typically be less expensive because the lower density chips cost less than half of the higher density chips.

The inexpensive 2-gigabyte DIMMs are typically dual-rank, meaning that the components on them are 1-gigabit chips.  If you want a dual-rank 4-gigabyte DIMM, then you have to step up to 2-gigabit chips.  And those are very expensive.  The other option is to buy quad-rank memory, which uses the 1-gigabit chips.  Quad-rank 4-gigabyte DIMMs for servers are currently going for $70 or $80.  Figure $20 per gigabyte.
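To make the arithmetic concrete, here’s the capacity math as a quick sketch: ranks times chips per rank times chip density, ignoring the extra ECC chip per rank, which adds no usable capacity.

// DIMM capacity = ranks * chips per rank * chip density, converted from gigabits to gigabytes
int singleRank2GB = (1 * 8 * 2) / 8;   // eight 2-gigabit chips      -> 2 GB
int dualRank2GB   = (2 * 8 * 1) / 8;   // sixteen 1-gigabit chips    -> 2 GB
int dualRank4GB   = (2 * 8 * 2) / 8;   // sixteen 2-gigabit chips    -> 4 GB, expensive chips
int quadRank4GB   = (4 * 8 * 1) / 8;   // thirty-two 1-gigabit chips -> 4 GB, cheap chips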

The only catch is that most older computers’ memory controllers don’t support the quad-rank DIMMs.  I do know that Dell’s PowerEdge 1950 server with the latest BIOS supports quad-rank.  The Dell 490 and 690 machines do not.

If you’re in the market for a new computer that you expect to load up with RAM, you should definitely make sure that it supports quad-rank memory.  If you’re adding memory to an older machine, you might save a lot of money by doing some research to see if you can upgrade the BIOS to support quad-rank.

Burning CDs on Windows Server 2008

A while back I mentioned that I was unable to burn a CD on my Windows Server 2008 box.  At that point I didn’t have time to figure out what was going on.  Today I needed to burn a CD, and had some time to fiddle with it.

On my Windows XP development box, I used ISO Recorder to burn ISO images to CD.  There’s not much to the program:  just right-click on a .iso file and select “Burn to CD” from the popup menu.  It’s simple and it works.  I wish all software worked so well.  Unfortunately, it doesn’t appear to work on my Server 2008 box.  The program doesn’t recognize my CD/DVD burner.

A bit of searching located ImgBurn, a free CD/DVD reader and recorder that works on all versions of Windows, including 64-bit versions.  This is more of a full-featured application than ISORecorder, but still quite easy to use.  It took me no time at all to start the program and tell it to burn an ISO image to CD.

I don’t know why ISORecorder won’t recognize the CD burner on my box.  The program says it works for Vista, so I imagine it should also work on Server 2008.  But it doesn’t and I don’t have the time or inclination to figure out why.  I’ve found ImgBurn, and I’m happy.

Update 2008/12/13

I tried to burn another CD today, and ImgBurn failed to recognize the recorder.  It turns out that I have to run the program with elevated privileges (i.e. “Run as Administrator”).  I didn’t have to do that the first time, because I started ImgBurn from the installation program, which was already running as Administrator.

Also of note:  ImgBurn will not work if Windows Media Player is running.  At least, it won’t on my machine.  Media Player apparently locks or otherwise allocates the optical drive by default.  Perhaps there’s a way to turn that “feature” off, I don’t know.

I suspect that ISORecorder will work, too, if I try it in Administrator mode.  The beauty of ISORecorder is that all I have to do is right-click on a .ISO file, and it will be written to the CD.  But I don’t know how to make that program run with Administrator privileges.

Charlie

We noticed Charlie having some trouble walking about two weeks ago, and the Saturday after Thanksgiving it got bad enough that we had to take him to the vet.  His back legs were working, but not well, and he was whimpering a bit as if in pain.  When that dog starts showing pain, you know there’s something wrong.  After x-rays of Charlie’s spine and a night of observation, we were referred to a specialist.

We went to the specialist on Tuesday of last week.  He did a myelogram, consulted with a radiologist, and diagnosed a ruptured disk or some other type of blockage that was preventing Charlie’s back legs from working fully.  He recommended surgery, as those kinds of injuries don’t typically fix themselves.  The surgery was yesterday (Monday).

Charlie came through the surgery fine.  He’s still at the vet, though, recovering.  Unfortunately, the doctor didn’t find what he was looking for:  no evidence of a ruptured disk or other type of injury to the spinal column that would cause the blockage.  He did, however, see some swelling in the area, which would present as a spinal injury in the myelogram.

The most likely diagnosis now is a fibrocartilaginous embolism (FCE):

FCE results when material from the nucleus pulposus (the gel-like material which acts as a force-absorbing cushion between two vertebrae) leaks into the arterial system and causes an embolism or plug in a blood vessel in the spinal cord. The condition is not degenerative, and therefore does not worsen. FCE is not painful for the pet, but some permanent nerve damage is likely. Roughly half of all patients diagnosed with FCE will recover sufficient use of their limbs.

Searching for “fce dogs” on the Internet will bring up some frightening pages, many of which indicate that the neurological damage is permanent.  After last night’s reading, I was resigned to Charlie being partially paralyzed for the rest of his life.  But after talking with the doctor today and reading some case studies, I’m much more hopeful.  The doctor, based on his experience with about 200 FCE cases, says that there’s a 60 to 70 percent chance that Charlie will recover fully.

It’s unfortunate that he had to go through what turned out to be an unnecessary surgery, but all the tests indicated that the surgery was the proper course of action.  Charlie’s pretty miserable right now, but he’s still relatively young (7 years old), and very strong.  I expect he’ll be recovered from the surgery very quickly, and then we can see about getting some of his mobility back.