A hidden benefit of health insurance

When I went the self-employment route three years ago, Debra took on the job of finding us some private health insurance. We finally settled on a plan that costs us about $300 per month and has a pretty high annual deductible: over $5,000. We also opened a health savings account so that our medical bills are paid with pre-tax dollars. At the time, $300 per month was about double what I’d been paying for much more comprehensive coverage (dental, eye care, short-term disability, prescription drugs, and $20 co-pays) through my employer. And that large deductible was something of a concern. Health care is expensive, right?

I had some tests done early last year. My doctor had quite a job convincing me that they were necessary, considering how much they cost. One test in particular was quoted at $4,000, and I cringed at how much that would deplete our health savings. But I really did need it, so I went ahead and scheduled the appointment.

The way our insurance works, we have the doctor or lab bill the insurance for any work, and the insurance company takes care of figuring out whether we’ve met the deductible. If we haven’t, the insurance company refuses the charge and sends it back to the doctor or lab, who in turn bills us. Since the test was done early in the year, I knew that I’d have to pay the entire amount because we hadn’t met our deductible.

Imagine my surprise when I got a bill for about $1,400, rather than the $4,000 that was quoted. How is that possible? I eventually discovered that there are two prices for health care: the retail price and the insurance negotiated price. Since I knew that I would be paying for the test myself (at least up to my deductible), I asked how much it cost and was quoted the full retail price. But I ended up paying the price that my insurance company had negotiated. The difference came to about eight months’ worth of premiums. Not a bad deal.

Something similar happened recently. I had my annual physical a couple of weeks ago, and when it was done I was presented with a bill for $500. Since I have a high deductible, the doctor asked that I pay half up front and they would bill me for whatever the insurance company didn’t cover. We heard back from the insurance company the other day. Their negotiated rate for the physical is something under $100. So the doctor ends up owing me money.

If it sounds confusing to you, join the club. I’m pretty surprised that the insurance company can get a $4,000 test for $1,400, or take 80 or 90 percent off the cost of a physical. It makes me wonder what the real cost of health care is. If you assume that care providers won’t negotiate rates that prevent them from making a profit, then it’s hard to understand why they’d set their retail rates so much higher. Although, to tell the truth, I can’t imagine how the doctor can make a profit when he spends an hour talking with and examining me and only gets $50 for his trouble.

In any case, you might want to reconsider if you’ve decided that you can’t afford health insurance. It’s quite possible that by purchasing even a very modest plan (and there are plans for quite a bit less than the $300/month that Debra and I have), you’ll end up making back the cost of the premiums by paying the reduced rates that your insurance company negotiates with providers.

Calling static methods as instance methods

I was half asleep yesterday when I fired off an email to my co-workers about the String.IsNullOrEmpty method in the .NET runtime. One thing I said in the mail was:

I find it curious that IsNullOrEmpty is a static method rather than an instance method. Why can’t I say:
if (!someString.IsNullOrEmpty())

The obvious answer, as one coworker pointed out, is “because it can be null.” Calling a method on a null reference will end up throwing an exception. Embarrassing, I know. I’ll blame lack of sleep.

But I got to thinking about it later and ended up asking the same question. Let me explain.

C# 3.0 (which shipped with .NET 3.5) introduced extension methods, which allow you to add functionality to existing classes without having to create a derived type, recompile, or otherwise modify the original types. The idea is that you create a static class to contain the extension method, and inside that class create a static method that has a special syntax. For example, the following would create a WordCount extension method for the String class:

namespace ExtensionMethods
{
    public static class MyExtensions
    {
        public static int WordCount(this String str)
        {
            return str.Split(new char[] { ' ', '.', '?' },
                             StringSplitOptions.RemoveEmptyEntries).Length;
        }
    }
}
If you then reference the namespace (with a using statement) in the code that uses this method, you can get the word count for a string (assuming that ‘s’ is a string) by writing:

int wc = s.WordCount();

You can also call it like this:

int wc = MyExtensions.WordCount(s);

Extension methods are certainly cool, and LINQ depends on them to do what it does. Although extension methods can be very confusing and even dangerous if used indiscriminately, they’re very useful.

Think about what happens inside the compiler in order to make this possible. When the compiler sees that special “this” syntax in the extension method’s definition, it has to make an entry in its symbol table that says, in effect, “ExtensionMethods.MyExtensions.WordCount is an instance method of type System.String.” Then, when it sees that call to s.WordCount, the compiler emits code equivalent to ExtensionMethods.MyExtensions.WordCount(s).

Now let’s go back to the static String.IsNullOrEmpty method. It is defined in the String class as:

public static bool IsNullOrEmpty(string value)

The only thing missing is the “this” syntax on the parameter. If that syntax existed, we’d be able to call IsNullOrEmpty on a null string reference, because the compiler would have converted it into a static method call. But it wouldn’t take any changes to the .NET runtime in order to make this possible. The compiler should be able to handle it.

That is, the compiler should be able to see that you’re calling a static method using instance method syntax, and fix it up accordingly. The compiler has all the information it needs in order to do that. And with the already-existing extension method machinery built into the compiler, I can’t see that this would be especially difficult to implement. Why wasn’t it?
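In the meantime, the extension method machinery already lets you build the call syntax I wanted yourself. Here’s a minimal sketch (the class name is my own invention) that simply wraps the existing static method:

```csharp
using System;

namespace ExtensionMethods
{
    public static class StringExtensions
    {
        // The call site compiles down to a static method call, so invoking
        // this on a null reference does not throw NullReferenceException.
        public static bool IsNullOrEmpty(this string str)
        {
            return string.IsNullOrEmpty(str);
        }
    }
}
```

With a `using ExtensionMethods;` directive in place, `string s = null; bool b = s.IsNullOrEmpty();` compiles and returns true, with no exception thrown.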

Debugging a stalled engine

Mower Engine

You’re looking at the front end of my riding lawn mower. After 12 seasons of use and my own maintenance, I finally had some problems with it that were better handled by a repair shop. I got the machine back on Wednesday after they replaced the starter and ring gear, did a tune-up, adjusted the front end alignment, and fixed the mower clutch. My plan Saturday morning was to mow the front yard before heading to the office. I got about halfway through when the engine died.

An engine requires four things to operate: compression, fuel, air, and spark. It was pretty clear that I had compression, as the engine had been operating and I would have known if it had eaten a valve or thrown a rod. I took off the air cleaner and cranked the engine to verify that fuel was getting into the engine, and since the air cleaner was clean I figured that wasn’t a problem. That leaves spark.

The most common reason for no spark is a bad spark plug. I removed the plug, reconnected it to the plug wire, held it against the engine block, and cranked the engine. Sure enough, no spark. After a quick run to the shop for a new spark plug, I put it in and … still no spark.

It didn’t take much debugging to find the problem.

In the picture above, you can see a black wire leading from the cowling up front, under the starter, and to a wiring harness that’s just below the gas tank. But when I first looked at it, that wire was running behind the starter. The mechanic at the repair shop must have got the wire caught there when replacing the starter. The result, as you can see in the picture below, was quite impressive:


Being wedged between the starter and the block flattened the wire, and the heat of engine operation finally melted enough of the insulation that the wire grounded against the engine block, and the engine died.

While I was working on this, I got to thinking how much what I was doing resembled what I do every day when debugging code: determine the possible causes of the failure and then check them off, one by one. In a very large number of cases, debugging really is that simple if approached logically. Granted, intermittent problems in multi-threaded programs can be much, much more difficult to find, but even they finally succumb to the same basic method: form a hypothesis to explain the behavior, formulate a test case to verify the hypothesis, and then correct the code that is in error.

I’ve found that most problems I run into in life are best handled in this manner. It’s certainly more productive than flailing around without a clue.

HashSet Limitations

Version 3.5 of the .NET runtime class library introduced the HashSet generic collection type. HashSet represents a set of values that you can quickly query to determine if a value exists in the set, or enumerate to list all of the items in the set. You can also perform standard set operations: union, intersection, determine subset or superset, etc. HashSet is a very handy thing to have. Simulating the same functionality in prior versions of .NET was very difficult.
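For instance, membership tests and the set operations are one-liners with the real HashSet API:

```csharp
using System;
using System.Collections.Generic;

var a = new HashSet<int> { 1, 2, 3 };
var b = new HashSet<int> { 2, 3, 4 };

Console.WriteLine(a.Contains(2));    // fast membership test: True
Console.WriteLine(a.IsSubsetOf(b));  // False (1 is not in b)

a.IntersectWith(b);                  // a is now { 2, 3 }
Console.WriteLine(a.IsSubsetOf(b));  // True
```

Doing the same thing before .NET 3.5 usually meant abusing a Dictionary whose values you ignored, and writing the set operations by hand.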

I’ve made heavy use of HashSet in my code since it was introduced, and I’ve been very happy with its performance. Until today. Today I ran into a limitation that makes HashSet (and the generic Dictionary collection type, as well) useless for moderately large data sets. It’s a memory limitation, and how many items you can store in the HashSet depends on how large your key is.

I’ve mentioned before that the .NET runtime has a 2 gigabyte limit on the size of a single object. Even in the 64-bit version, you can’t make a single allocation that’s larger than 2 gigabytes. I’ve bumped into that limitation a few times in the past, but have been able to work around it by restructuring some things. I thought I was safe with the HashSet, though. Even with an 8-byte key, I figured I should be able to store on the order of 250 million items. I found out today that the number is quite a bit lower: a little less than 50 million. 47,995,853 to be exact. After I figured out what was causing my problem, I verified it with this program:

static void Main(string[] args)
{
    HashSet<long> bighash = new HashSet<long>();
    for (long i = 0; i < 50000000; ++i)
    {
        if ((i % 100000) == 0)
            Console.Write("\r{0:N0}", i);
        bighash.Add(i);
    }
    Console.Write("Press Enter");
    Console.ReadLine();
}
The program throws OutOfMemoryException when it tries to add the 47,995,853rd (or perhaps the 47,995,854th) item, because it’s increasing the capacity of an internal data structure and that data structure exceeds 2 gigabytes.

If I reduce the size of the key to 4 bytes (an int; a .NET long is 8 bytes), then I can add just a little less than 100 million items before hitting the limit. Let’s think about that a little bit.

50 million keys of 8 bytes each should take up about 400 megabytes. 100 million keys of 4 bytes each should take up about 400 megabytes. I realize that there’s some overhead in a hash table to deal with collisions, but five times is excessive! I can’t imagine a hash table implementation that has an overhead of five times the total key size. And yet, that’s what we have in .NET.

It’s bad enough in today’s world, where a machine with 16 gigabytes of RAM can be had for under $2,000, that we have to deal with the 2-gigabyte-per-object limitation in .NET. But to have the runtime library’s implementation of a critical data structure squander memory in this way is too much.

Any workaround is very painful. We’ll have to write our own hash table implementation that allocates unmanaged memory and mucks around with pointers in unsafe code. We’re old C programmers, so that’s not beyond our capabilities. But it sure makes me wonder why I selected .NET for this project. In the process, we’re going to lose a lot of the functionality of Dictionary and of HashSet.
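Before we go all the way to unmanaged memory, one stopgap we may try first is sharding the keys across many smaller HashSets so that no single internal array comes anywhere near the 2-gigabyte ceiling. A rough sketch (the class name and shard count are my own guesses, not a tuned implementation):

```csharp
using System;
using System.Collections.Generic;

// Sketch: spread long keys over many HashSets so that each set's internal
// arrays stay far below the 2 GB single-object limit. 64 shards is an
// arbitrary starting point, not a measured choice.
public class ShardedLongSet
{
    private const int ShardCount = 64;
    private readonly HashSet<long>[] shards;

    public ShardedLongSet()
    {
        shards = new HashSet<long>[ShardCount];
        for (int i = 0; i < ShardCount; i++)
            shards[i] = new HashSet<long>();
    }

    // Pick a shard from the low bits of the key.
    private HashSet<long> ShardFor(long key)
    {
        return shards[(int)((ulong)key % ShardCount)];
    }

    public bool Add(long key) { return ShardFor(key).Add(key); }
    public bool Contains(long key) { return ShardFor(key).Contains(key); }
}
```

Note that this doesn’t reduce the per-entry overhead at all; it only dodges the single-allocation limit, and we’d give up the built-in set operations in the bargain.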

I can’t be the only one running up against these kinds of limitations. 10 years ago, a data set of 100 million items may have been considered large. Today 100 million is, at best, moderately large. There are plenty of applications that work with billions of items and today’s computers have the capacity to store them all in RAM. We damned well should be able to index them in RAM using modern tools.

I hope the .NET team is working on a solution to the 2-gigabyte limit, and I’d strongly suggest that they take a very close look at their hash table implementation.

