Dolphin pattern

Several people have asked if I have a dolphin carving pattern. I have some dolphin cutouts that somebody gave me a while back, so I just traced the large one on a piece of paper and scanned it. This is the same basic pattern I used for my dolphin experiment a while back.

That’s a reduced copy of the pattern. Unfortunately, I have lost access to the full-sized image. If you want a larger figure than this pattern provides, you can probably enlarge the pattern image without losing any critical information.

The full-sized pattern gave a figure that’s approximately 8.5 inches long and 4.8 inches tall. For a figure that size, you’ll want two-inch stock.

To resize, simply load the file into your favorite photo editor and tell it how large you want the figure. A figure half that long (4.25 inches) will require one-inch stock.

If you use this pattern, I’d sure like to see the result. Leave a comment or send me email with a picture.

Happy carving!

File.Exists is only a snapshot

When writing programs for a multitasking operating system (e.g. Windows, Linux, OS X, or any other modern operating system), you have to realize that resources are shared, and that the machine’s state can change at any time without your knowledge. It seems obvious, but many programmers don’t fully understand the ramifications. For example, I see code like this distressingly often:

    if (File.Exists(filename))
    {
        File.Delete(filename);
    }

The programmer apparently wants to avoid trying to delete a non-existent file, thinking that File.Delete will throw an exception if the file isn’t there.

The problem with the code is that it doesn’t work as expected. It reduces the likelihood that the program will call File.Delete with the name of a file that doesn’t exist, but it doesn’t eliminate the possibility. Why? Because some other thread (perhaps in a different process) could delete the file between the time File.Exists returns true and the time the program tries to delete the file. In addition, this code doesn’t account for the possibility that some other thread has the file open, or for any number of rare but certainly not impossible events, such as a disk failure.

What I find amusing is that in the most common case the call to File.Exists isn’t even necessary. The typical use here is to delete a file that’s in the application’s data or output directory. The programmer erroneously believes that File.Delete will throw an exception if passed the name of a file that doesn’t exist. But File.Delete silently does nothing in that case. It will throw an exception if the path is invalid, but in most cases the program couldn’t have gotten this far if the path were invalid. The programmer is trying to avoid a problem that doesn’t exist, and ignoring a much bigger potential problem.

When you’re working with shared resources, any information you have about that resource is a snapshot. It tells you what the state was at the time you checked, but that state can change at any moment, potentially just milliseconds after you check. File.Exists tells you that the file existed when you checked. It doesn’t guarantee that the file will still exist when you try to delete it.

The correct way to delete a file is:

    try
    {
        File.Delete(filename);
    }
    catch (DirectoryNotFoundException)
    {
    }
    catch (IOException)
    {
    }
    // etc.

Of course, you should handle only the exceptions you’re expecting and know how to handle, and depending on your application you might want to do something other than just swallow them. File.Delete can throw a number of different exceptions, and it’s up to you to decide which ones to handle.
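
Here’s a sketch of one way that might look. The TryDelete name, the bool return value, and the Console.WriteLine logging are just illustrative choices, not anything prescribed by the File API, and the snippet assumes using System and System.IO directives at the top of the file.

    static bool TryDelete(string filename)
    {
        try
        {
            File.Delete(filename);
            return true;
        }
        catch (DirectoryNotFoundException)
        {
            // The directory doesn't exist, so the file can't either; nothing to do.
            return true;
        }
        catch (IOException ex)
        {
            // The file is open elsewhere, or the disk misbehaved; report it rather than swallow it.
            Console.WriteLine("Could not delete {0}: {1}", filename, ex.Message);
            return false;
        }
        catch (UnauthorizedAccessException ex)
        {
            // The file is read-only, or we don't have permission to delete it.
            Console.WriteLine("Could not delete {0}: {1}", filename, ex.Message);
            return false;
        }
    }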

Another very common mistake is checking to see if a file is locked before trying to open it for reading. For example:

    if (CanOpenFileForReading(filename))
    {
        using (FileStream f = File.OpenRead(filename))
        {
            // do stuff
        }
    }

The implementation of CanOpenFileForReading is irrelevant. This can’t work because some other thread could lock the file or delete it before your thread gets around to opening it. I’ll grant that the window of vulnerability is pretty small, but it’s there. And I’ve seen programs written this way fail precisely because another thread managed to sneak in and lock the file during that time. The right way to open a file is:

    try
    {
        using (FileStream f = File.OpenRead(filename))
        {
            try
            {
                // do stuff
            }
            catch (Exceptions that occur while doing stuff)
            {
            }
        }
    }
    catch (Exceptions that occur when trying to open the file)
    {
    }

There are those who have some quasi-religious resistance to using exceptions in this way. To them I say: Get over it! We can debate the proper use of exceptions ’til the cows come home, but the .NET runtime library in general and the File class in particular use exceptions for error handling. If you try to code around that, you’re going to write buggy and fragile code that just won’t work in the face of errors. Use the API in the way that it’s intended.

The elusive desk gecko

A member of the local wood carving club, Central Texas Woodcarvers Association, made a gecko a few years ago and brought it to one of the club meetings. He recently posted the pattern on the club’s site, so I thought I’d give it a try.

This particular figure began life as a chunk of sycamore that I’d picked up somewhere, probably down by the creek. I squared off the log on the bandsaw, traced the Gecko Pattern on the top and one side, and then went back to the bandsaw to make the cutout. The pattern, by the way, is a huge image. I resized it to about 8 inches. The finished piece is about 7.25 inches long.

I did all the carving on this with a single knife, then sanded it with 150 grit sandpaper. The sycamore is a pleasure to work with. I find it much nicer to carve than basswood, and I just love the grain, especially after finishing.

The finish is two coats of Watco Danish Oil. I’ll let that cure for three or four days and then spray two or three thin coats of Deft Satin polyurethane.

Overall, I’m very happy with the way this piece turned out. The only thing I’d like to change is the feet. I found it very difficult to get any kind of rounding on the feet or toes. With the grain running the length of the figure, those thin feet and toes are very brittle, and shaping them was a challenge. I did a little rounding with the sandpaper, but I’ll have to figure out a better way next time.

Armadillo in mesquite

I cut out the pattern for this piece two or three years ago. Then it sat on a shelf, mocking me. I had originally planned to carve it with knives and gouges, but that mesquite is just so dang hard that I gave up on the idea after a couple of tries.

This was a bit of an ambitious project for me because I created my own pattern by downloading pictures of real armadillos, drawing an outline, and then cutting it out. It’s also a little larger than most of my carvings, measuring about 8 inches long (including the tail), 3.5 inches tall, and 2 inches wide.

The tail is a little thicker than it should be; I left it that way because I was worried about it breaking.

My only real disappointment with the piece is the legs, which I should have either detailed or made more stylized. As they are, they kind of detract from the rest of the piece.

I met the primary goal, though: make an armadillo-shaped carving that shows off the beauty of the mesquite.

Solution to yesterday’s puzzle

The subrange puzzle I posted yesterday came from this Stack Overflow post. My original response contained the algorithm below, in a slightly different form. It also contained a bug: it didn’t handle the case where the end of the longest subrange was also the end of the array. An oversight.

My solution is based on the insight that in a sorted array, if a[start]+r >= a[end], then a[start+1]+r >= a[end] as well (provided that start <= end), because a[start+1] >= a[start].

So if I start at the beginning of the array with start = end = 0 and move the end index forward until a[0]+r < a[end], I’ve found my first “longest subrange”: it runs from a[0] through a[end-1]. Then I increment start. At this point, I know that the values a[1] through a[end-1] still satisfy the condition. They have to, because the array is sorted, meaning that a[1] >= a[0].

I don’t have to check the values between a[1] and a[end-1] to see if they’re in the new subrange. I already know that they are.

The algorithm does a single pass through the array, incrementing end until a[start]+r < a[end], then incrementing start until a[start]+r >= a[end].

The result is that the algorithm never makes more than 2*n comparisons, where n is the number of items in the array. The actual number of comparisons will be approximately (2*n)-length, where length is the length of the longest run. The algorithm also uses constant extra space regardless of the size of the array or the value of r.

Here’s the code in C#.

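    // Returns the start index of the longest subrange in which every value is
    // within r of the subrange's first value. The length of that subrange is
    // returned through the out parameter.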
    int FindRange(int[] a, int r, out int length)
    {
        int start = 0;
        int maxIndex = 0;
        int matchLength = 0;
        length = 0;
        for (int end = 0; end < a.Length; ++end)
        {
            if (a[end] > a[start] + r)
            {
                matchLength = end - start;
                if (matchLength > length)
                {
                    length = matchLength;
                    maxIndex = start;
                }
                // move forward until within range
                do
                {
                    ++start;
                } while (a[end] > a[start] + r);
            }
        }
        // Final check, from a[start] to end of array
        matchLength = a.Length - start;
        if (matchLength > length)
        {
            length = matchLength;
            maxIndex = start;
        }
        return maxIndex;
    }
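
As a quick sanity check, here’s a small test harness that runs the method against the example array from yesterday’s puzzle. It’s just for illustration; it assumes FindRange is declared static in the same class and that a using System directive is in effect.

    static void Main()
    {
        int[] a = { 1, 2, 5, 7, 8, 8, 9, 12, 14, 15, 16, 17, 18, 19, 20 };
        int length;

        int start = FindRange(a, 2, out length);
        // Prints "r = 2: start = 3, length = 4" (the subrange {7, 8, 8, 9})
        Console.WriteLine("r = 2: start = {0}, length = {1}", start, length);

        start = FindRange(a, 5, out length);
        // Prints "r = 5: start = 8, length = 6" (the subrange {14, 15, 16, 17, 18, 19})
        Console.WriteLine("r = 5: start = {0}, length = {1}", start, length);
    }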

This problem is somewhat similar to the Maximum subarray problem from Jon Bentley’s Programming Pearls, which I read, and whose puzzles I worked through, almost 20 years ago. Although I didn’t recognize it as such at the time, I suspect that the time I spent solving that problem (and other puzzles) back then helped me solve this one in just a few minutes. The disturbing thing, though, is that at first I said the algorithm was O(n²) in the worst case; I had to clean up the code before I realized it was O(n). Weird.

One might ask why I spend time on these puzzles. There are several answers. I find them interesting: some people like Sudoku, crossword puzzles, or chess; I like algorithm puzzles. They help me relax, and they sharpen my skills. And some of the puzzles are real-world problems: many times in my career I’ve used techniques learned from doing these puzzles to drastically reduce the running time of a critical application.

A little programming puzzle

Here’s an interesting puzzle I ran across yesterday.

Say you have a sorted array of integers, for example:

{1, 2, 5, 7, 8, 8, 9, 12, 14, 15, 16, 17, 18, 19, 20}

You want to find the longest subrange, start..end, where a[start]+r >= a[end], and r is a constant positive integer. (Or, to put it another way, (a[end] - a[start]) <= r).

So given the array above, if r is 2, then the longest subrange would be {7, 8, 8, 9}. If r is 5, then the longest subrange would be {14, 15, 16, 17, 18, 19} (it ties with {15, 16, 17, 18, 19, 20}; take the first one found).

Write a function that, given an array and a range, will return the starting index and the number of items that match. For example, in C#:

    int FindSubrange(int[] a, int r, out int length)
    {
        // do calculation
        ...
        // return values
        length = maxLength;
        return maxRangeStartIndex;
    }

This function would return index 3 and length 4 for the first example above. For the second example it would return index 8 and length 6.

The array can be of arbitrary length (potentially billions of items) and can contain any valid integer values. It is sorted, but might contain duplicate values.

There is an obvious O(n²) solution. Can you do better?
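
For reference, here’s a rough sketch of that obvious approach (just an illustration of the O(n²) idea, not the answer I’ll be posting): for each start index, scan forward while the values stay within r of a[start], and remember the longest run.

    int FindSubrangeBruteForce(int[] a, int r, out int length)
    {
        int maxStart = 0;
        length = 0;
        for (int start = 0; start < a.Length; ++start)
        {
            // Scan forward while the values stay within r of a[start].
            // The cast to long avoids overflow when values span the full int range.
            int end = start;
            while (end < a.Length && (long)a[end] - a[start] <= r)
            {
                ++end;
            }
            // Remember the longest run seen so far.
            if (end - start > length)
            {
                length = end - start;
                maxStart = start;
            }
        }
        return maxStart;
    }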

I’ll post my solution in a day or two.