Type inference and the conditional operator

The C# conditional operator is a kind of shorthand for an if...else statement. For example, this code:

int result;
if (y == 0)
    result = 0;
else
    result = x/y;

Can be rewritten using the conditional operator:

int result = (y == 0) ? 0 : x/y;

Terseness isn’t a goal in and of itself, but very often the conditional operator really does make code more readable.

This morning I was working on code that will produce output in XML or JSON, depending on a command line parameter. I created a delegate, two methods (one for each output type), and code to assign it, like this:

delegate void OutputRecordDelegate(MediaRecord rec);
OutputRecordDelegate OutputProc = null;

void OutputRecordXml(MediaRecord rec)
{
}

void OutputRecordJson(MediaRecord rec)
{
}

// code to initialize
if (OutputFormat == "xml")
    OutputProc = OutputRecordXml;
else
    OutputProc = OutputRecordJson;

That all works as expected. It compiles and runs just fine. But this is exactly the kind of code that I think is more readable when written with the conditional operator, like this:

OutputProc = (OutputFormat == "xml") ? OutputRecordXml : OutputRecordJson;

Imagine my surprise when the compiler issued an error message:

Type of conditional expression cannot be determined because
there is no implicit conversion between 'method group' and 'method group'

Now why is that? It looks right!

The problem has to do with type inference. In the statement:

OutputProc = OutputRecordXml;

The compiler uses type inference to convert OutputRecordXml to the proper type: OutputRecordDelegate. It’s as if I had written:

OutputProc = new OutputRecordDelegate(OutputRecordXml);

The compiler has enough information and the smarts to infer the type and do the proper conversion.

But the specification for the conditional operator doesn’t allow that. The rules for type conversion in the conditional operator are a bit different:

The second and third operands of the ?: operator control the type of the conditional expression. Let X and Y be the types of the second and third operands. Then,

  • If X and Y are the same type, then this is the type of the conditional expression.
  • Otherwise, if an implicit conversion (Section 6.1) exists from X to Y, but not from Y to X, then Y is the type of the conditional expression.
  • Otherwise, if an implicit conversion (Section 6.1) exists from Y to X, but not from X to Y, then X is the type of the conditional expression.
  • Otherwise, no expression type can be determined, and a compile-time error occurs.

Since there is no implicit conversion that will convert between the two methods, the compiler issues an error message.

I can rewrite the code to do the explicit conversions:

OutputProc = (OutputFormat == "xml") ?
    (OutputRecordDelegate)OutputRecordXml :
    (OutputRecordDelegate)OutputRecordJson;

Whereas that works, having to do the explicit casts eliminates most of the benefit of using the conditional operator. The simple version (the one that doesn’t work) is, to me, unquestionably more readable than the standard if...else construct. Having to add the casts makes the code a bit awkward, and I see little or no benefit as a result.
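That said, the workaround can be trimmed a little. If I’m reading the spec correctly, when only one operand of the conditional has a type and the other operand is implicitly convertible to that type, the expression takes on that type. So casting just one of the method groups should be enough to satisfy the compiler:

OutputProc = (OutputFormat == "xml") ?
    (OutputRecordDelegate)OutputRecordXml :
    OutputRecordJson;

It’s still not the clean version I wanted to write, but it’s a little less noisy than casting both operands.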

Throwing the wrong exception

The .NET WebClient class abstracts away most of the complexity associated with downloading data from and uploading data to Web sites. Once you instantiate a WebClient instance, you can upload or download a page with a single line of code. For example:

var MyClient = new WebClient();
// Download a page
string pageText = MyClient.DownloadString("http://example.com/index.html");
// send a file via FTP
MyClient.UploadFile("ftp://example.com/uploads/file.txt", "file.txt");

WebClient also has methods for easily doing asynchronous requests so that you can write, for example:

MyClient.UploadFileCompleted += UploadCompletedHandler;
MyClient.UploadFileAsync(new Uri("ftp://example.com/file.txt"),
     "STOR", "file.txt", null);

The file will be uploaded in the background and when it’s finished the UploadCompletedHandler method is called.
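The handler itself is an ordinary event handler that receives an UploadFileCompletedEventArgs. A minimal sketch might look like this; check the Error property to find out whether the upload actually succeeded:

void UploadCompletedHandler(object sender, UploadFileCompletedEventArgs e)
{
    // e.Error is null on success; otherwise it holds the exception that
    // terminated the upload.
    if (e.Error != null)
        Console.WriteLine("Upload failed: " + e.Error.Message);
    else
        Console.WriteLine("Upload complete.");
}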

As always, the devil is in the details. Documentation for UploadFileAsync says that the method will throw WebException if the arguments are incorrect or if there are errors uploading. For example, if the fileName argument is empty, the method is supposed to throw WebException.

Unfortunately, it doesn’t. This line should throw WebException:

MyClient.UploadFileAsync(new Uri("ftp://example.com/file.txt"), "STOR", string.Empty, null);

Instead, it throws InvalidCastException in a rather odd place, as you can see from this stack trace.

System.InvalidCastException: Unable to cast object of type 'System.ComponentModel.AsyncOperation' to type 'UploadBitsState'.
 at System.Net.WebClient.UploadFileAsyncWriteCallback(Byte[] returnBytes,
 Exception exception, Object state)
 at System.Net.WebClient.UploadFileAsync(Uri address, String method, String fileName, Object userToken)
 at testo.Program.DoIt() in C:\DevWork\testo\Program.cs:line 38

This is a problem because WebClient.UploadFileAsync doesn’t say anything about throwing InvalidCastException. If you’re writing code that handles possible errors, you’re going to allow for WebException, but you’ll ignore InvalidCastException. At least, that’s how you should write the code: only catch those exceptions you know about and are prepared to handle. InvalidCastException thrown by UploadFileAsync is definitely unexpected and is an indication of a more serious error.

Ever curious, and since I already had the .NET Reference Source installed, I thought I’d go see why this happens.

The line that throws the error is in a method called UploadFileAsyncWriteCallback, the first few lines of which read:

 private void UploadFileAsyncWriteCallback(byte[] returnBytes, Exception exception, Object state)
{
    UploadBitsState uploadState = (UploadBitsState)state;

So the value passed in the state parameter must be of type UploadBitsState or of a type that can be cast to UploadBitsState. Otherwise the cast is going to fail. Something, it seems, is passing the wrong type of value to this method.

The code that makes the call is in the UploadFileAsync method. The code in question looks like this:

try
{ 
    // Code that uploads the file.
}
catch (Exception e)
{
    if (e is ThreadAbortException || e is StackOverflowException || e is OutOfMemoryException)
    {
        throw;
    }
    if(fs != null)
    {
        fs.Close();
    }
    if (!(e is WebException || e is SecurityException))
    {
        e = new WebException(SR.GetString(SR.net_webclient), e);
    }
    // This next line causes the error.
    UploadFileAsyncWriteCallback(null, e, asyncOp);
}

I took out the actual uploading code, because it’s not really relevant here. The problem is in the exception handler, in particular the last line. The asyncOp that is being passed as the last parameter to UploadFileAsyncWriteCallback is of type AsyncOperation. But we’ve already seen above that UploadFileAsyncWriteCallback expects the last parameter to be UploadBitsState. Oops.

Because this bug is inside the exception handler, any exception (other than ThreadAbortException, StackOverflowException, or OutOfMemoryException) that is thrown in the code will end up causing UploadFileAsync to throw InvalidCastException.

Until the code is fixed, there’s nothing you can do to avoid this bug other than by not calling UploadFileAsync. You can code around it by writing your code to handle InvalidCastException in the same way that it would handle WebException, but understand that doing so will hide other problems. If, for example, something caused UploadFileAsync to throw SecurityException, it’s going to be reported as InvalidCastException, and your code won’t be able to tell the difference. It’s not like there’s an inner exception to look at.
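If you do decide to code around it, the workaround is simply to catch both exception types and treat them the same. A rough sketch, where HandleUploadError stands in for whatever error handling you would normally do:

try
{
    MyClient.UploadFileAsync(new Uri("ftp://example.com/file.txt"), "STOR", fileName, null);
}
catch (WebException ex)
{
    // The documented failure mode.
    HandleUploadError(ex);
}
catch (InvalidCastException ex)
{
    // Workaround for the bug described above. This can mask other problems
    // (a SecurityException, for example), and there's no inner exception
    // to inspect.
    HandleUploadError(ex);
}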

I’ve reported the bug at Microsoft Connect. The number is 675575, WebClient.UploadFileAsync throws InvalidCastException.

Update 2011/07/30

The following is from an email acknowledgement I received from Microsoft Connect:

This appears to be a regression in .NET 4.0 and we will plan to address it in a future release. Note that all of the Async Upload methods are similarly affected.

Powershell

We do a fair amount of batch processing at work, and I have some fairly involved scripts that are combinations of Windows batch files and VBScript. Both languages are wonky in the extreme, and I’ve been uncomfortable working with them. But that’s what I knew four years ago. What were simple scripts early on evolved (as such things are wont to do) over the years until they were large, complicated, and a complete mess.

I resisted replacing those scripts because I thought I didn’t have time to learn a new scripting language. I’d heard of Windows Powershell, but avoided it because I thought it would take too long to convert my scripts. I bit the bullet last week and was pleasantly surprised. It took all of a couple hours to learn enough about Powershell to fully replace the kludge of batch files and VBScript. The result is a cleaner and more robust update process.

Before you write your next batch file, VBScript, or JScript program to automate something on your Windows server, give Powershell a look. Most likely you’ll be able to do what you want more easily, handle errors better, and end up with a much more readable and more easily modified script.

I’ll be the first to admit that I’m not a Powershell expert. But I’ve already been incredibly productive with it. It’s a real command shell that allows access to the entire .NET Framework. Like a .NET interpreter. Very, very cool. A C# or Visual Basic .NET programmer will feel comfortable in Powershell in a matter of minutes.

Very well done and quite useful. Highly recommended.

You are what you read

I’ve often heard, “You are what you eat.” It’s true. What you eat and how much you eat account for a very large part of your physical development.

What you don’t hear as often (if ever) is that, mentally, you are what you read, watch, listen to, or otherwise experience.

Information theory deals with the quantification of information. A primary concern is entropy, which can be thought of as a measure of how unpredictable a message is. In this context, what counts as predictable is based on what is already known. The more a message deviates from what was expected, the higher the entropy and thus the higher the information content.

Think of it this way. Imagine you’re standing outside in a rainstorm. Your friend approaches and says, “It’s raining.” The information content of that message is almost zero. The only thing you learned is that your friend is a master at stating the obvious, and you probably knew that already. Of course it’s raining!
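To put a number on it: information theory says that a message x occurring with probability p(x) carries -log2 p(x) bits of information, and entropy is the average of that over all the messages a source can produce. A message you were all but certain of, with p(x) close to 1, carries essentially zero bits. That’s the “It’s raining” case.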

Regardless of how many people approached to tell you that it’s raining, you wouldn’t learn anything new.

If your primary means of learning about what’s happening in the world is Fox News, Rush Limbaugh, and Glenn Beck, you’re not going to learn anything new. Oh, to the extent that you’re “keeping up with world events,” you’ll learn things. But that’s just trivia. The same thing will happen if your only source of news is The Huffington Post or Michelle Malkin, or whatever narrowly-focused outlet you subscribe to.

If you depend on just one or a very few sources for your news, the information content of the news that you consume will be incredibly low. Why? Because you’re going to hear exactly what you expect. If Congress passes a new jobs bill that the President signs, you already know what Rush Limbaugh is going to say about it. If the President signs a bill that reduces corporate income taxes, you already know what The Daily Kos is going to say about it.

If you hear exactly what you expect, then you’ve received no new information. You’re not learning anything.

The only learning that matters much is that which modifies your world view. Reading the same old opinions expressed on new topics only reinforces your current world view. If you want to learn something, you have to seek out information from sources that will present world views that differ from yours. Sure, some of them are bunk. Most of what Glenn Beck has to say is bunk, too, but that doesn’t stop millions of lazy people from agreeing with him because they can’t be bothered to seek out new sources of information and put forth the effort to evaluate it.

I’m not talking about changing fundamental beliefs. But all of us hold minor beliefs that are based on incorrect or outdated information. We believe things without understanding how we arrived at those beliefs, and then we hold onto those beliefs in spite of obviously correct conflicting evidence.

There’s nothing wrong with strong opinions. Without a strong opinion, it’s impossible to develop a strong argument for or against anything, and it’s impossible to devote your full energy to any pursuit. But those opinions must be weakly held, subject to examination and revision at any time based on new information. A truly wise person has strong opinions that are weakly held–the exact opposite of what partisans or tribalists of all stripes advocate.

There are incredibly intelligent people who are not particularly wise. They continually express just one point of view (typically in the political arena, but sometimes in others) blindly, pointing out the virtues of their side and the faults and foibles of the other side, but acknowledging neither the virtues of the other side nor the faults of their own side. At best, these people are unaware of their own ignorance. At worst, they’re intentionally trying to mislead or manipulate you. Either way, they are not credible sources of information.

More Windows file caching

Serendipity is an odd thing. Shortly after I wrote my previous entry, I stumbled across a possible solution to the problem while I was reading comments on an unrelated blog post. I spent a little time this morning checking it out.

According to Microsoft TechNet, the LargeSystemCache registry entry controls file caching behavior. This value, which is stored under HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management, “[s]pecifies whether the system maintains a standard size or a large size file system cache, and influences how often the system writes changed pages to disk.”

There are two possible values: 0 and 1. Here’s what the documentation has to say:

Value 0: Establishes a standard size file-system cache of approximately 8 MB. The system allows changed pages to remain in physical memory until the number of available pages drops to approximately 1,000. This setting is recommended for servers running applications that do their own memory caching, such as Microsoft SQL Server, and for applications that perform best with ample memory, such as Internet Information Services (IIS).

Value 1: Establishes a large system cache working set that can expand to physical memory, minus 4 MB, if needed. The system allows changed pages to remain in physical memory until the number of available pages drops to approximately 250. This setting is recommended for most computers running Windows Server 2003 on large networks.

LargeSystemCache works in concert with the Size value specified in HKLM\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters. The documentation goes on to say:

Option setting                                        Large System Cache value   Size value
Minimize memory used                                  0                          1
Balance                                               0                          2
Maximize data throughput for file sharing             1                          3
Maximize data throughput for network applications     0                          3

That sounds promising. The registry settings on my Windows Server 2008 system were set for file sharing, with a LargeSystemCache value of 1.
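If you’d rather not dig through regedit to check a machine’s settings, a quick C# sketch like this one (read-only, using the key paths from the TechNet article) will display both values:

using System;
using Microsoft.Win32;

class ShowCacheSettings
{
    static void Main()
    {
        // LargeSystemCache lives under the Memory Management key.
        object largeSystemCache = Registry.GetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management",
            "LargeSystemCache", null);

        // Size lives under the LanmanServer parameters key.
        object size = Registry.GetValue(
            @"HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\LanmanServer\Parameters",
            "Size", null);

        Console.WriteLine("LargeSystemCache = {0}", largeSystemCache ?? "(not set)");
        Console.WriteLine("Size = {0}", size ?? "(not set)");
    }
}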

Unfortunately, changing the settings had no visible effect on the machine’s behavior when reading large files from the server. I changed LargeSystemCache to 0, set Size to 1, and restarted the computer. When it came back up, I verified the registry settings and then started copying a 60 GB file from the server to my workstation.

I opened Internet Explorer on the server and began surfing the Web. As expected, memory usage increased on the server as the file copy progressed. What wasn’t expected, though, was that when memory filled, Internet Explorer became unresponsive, as did other programs. This is the same behavior I saw when LargeSystemCache was set to 1.

I tried all of the documented values for Size, rebooting the machine after each try, and found no difference in caching behavior.

It’s possible that the LargeSystemCache value is no longer used. After all, the linked TechNet article was written for Windows Server 2003. A blog entry from February 2008, WS2008: Upgrade Paths, Resource Limits & Registry Values, shows the value as “Not Used,” although what that means is unclear. In addition, the last comment on this thread indicates that the value is not used in Server 2008.

I have no way to test whether the registry entries work as advertised in Windows Server 2003. My testing indicates, though, that they have no effect in Windows Server 2008.

Windows file copy bug revisited

Operating systems use file caching to prevent having to read commonly-accessed information from the disk repeatedly. Depending on the situation, the operating system might keep the most recently read stuff in memory, or it might keep the most commonly accessed stuff in memory. Either way, the system dynamically adjusts how much memory it uses for caching. The idea is that if your programs aren’t using the memory, the OS can use it for cache with the understanding that if your programs need it, the cache gets flushed.

It’s kind of like using your neighbor’s covered parking place when he’s away, with the understanding that when he comes back, the space is his again.

A good caching scheme can make a big difference in the speed of disk access. Main memory is slow compared to the CPU, but the disk drive is glacial. If you can avoid hitting the disk for data that’s already been read, you’re way ahead.

A poor caching scheme can result in slower disk access, and an idiotic caching scheme can bring your computer to a halt. In at least one instance, Windows has an idiotic caching scheme that is in a very real sense a security vulnerability.

I’ve mentioned several times the idiotic file caching bug that causes Windows to come to a screeching halt when copying large files. I did a bit more research and testing, and I can say with some certainty what’s happening, although I don’t know all the internals of why. Here’s what appears to be happening.

  1. Program requests the first block of the file.
  2. The server reads the first block into memory, and sends it to the client.
  3. Steps 1 and 2 are repeated many times, and each time the server saves the sent data in memory.
  4. Over time, memory begins to fill with data that the server is caching. I can’t tell if the server is reading ahead in order to have the next block ready to send, or if it’s holding on to data already sent, just in case somebody else might want to read it.
  5. When the cache fills free memory, the operating system starts looking for other memory to use. It starts paging idle programs to virtual memory. Then active programs. And then, as bizarre as it may seem, it looks like the operating system starts paging the cache itself out to disk!

In the past, I thought that Windows was reading ahead, caching parts of the file that hadn’t yet been sent to the client. But my test results point more towards the other conclusion: that Windows is buffering things that it’s already sent. I freely admit that I could be wrong, as I haven’t yet been able to say with certainty which of the two is true. Read-ahead caching at least makes some limited amount of sense. Paging a disk cache to disk would be especially idiotic, but it would definitely explain why the system’s responsiveness continues to degrade after memory fills up. Still, I can’t prove that it’s actually doing that.

Without some serious kernel-level debugging, I don’t have any way to determine exactly what is being cached, but it mostly doesn’t matter. The result is the same: Windows pages out working programs and active data in favor of the file cache. Ultimately, this makes the server unusable.

It doesn’t take an exceptionally large file by today’s standards in order for this to happen. For example, trying to copy a 50 gigabyte file from a server that has 16 gigabytes of memory will exhibit this behavior. I’ve experienced this problem on Windows XP, Server 2008, and Windows Server 2008 R2.

This is a security vulnerability because a very large file in a shared directory is an invitation to a denial of service attack. All a user has to do is start copying that file. In short order, the file server’s memory will fill up, it will start swapping, and will stop serving requests. Worse, because the machine becomes non-responsive, there will be no way to see what’s causing the problem or to disconnect the offending computer. The system administrator is left with the options of 1) trying to find and disconnect the perpetrator at the other end; or 2) reset the file server. The first can be very difficult in a large organization, and the second will cause data corruption if there are any files open for writing.

Understand, a user doesn’t have to be a “black hat” in order to perpetrate this “attack”. He could be reading a file that he is authorized to read. The operating system is the culprit here.

What causes this?

In the past, I thought that the problem was caused by the CopyFile and CopyFileEx API functions, which the standard Windows copy commands (COPY, XCOPY, and ROBOCOPY) call to do their jobs. Whereas it’s true that those two functions do exhibit the problem, it’s not limited to those two functions.

It turns out that any program that reads through a large file can trigger this problem. Any program that uses the CreateFile API function to open a network file can cause this to happen. And CreateFile is what ends up being called by the runtime libraries of almost every Windows programming language. Certainly the C++ I/O subsystem calls it, as does the .NET runtime. If you’re writing C# code that accesses very large files across the network, you will see this happen.

I don’t have any way to say for certain what’s going on inside of Windows. That is, I don’t know the internal mechanism that causes this idiotic caching behavior. But I do know that calling CreateFile with default parameters will trigger it, and CopyFile and CopyFileEx appear to call CreateFile with default parameters.

What can you do about it?

From a user’s standpoint, the only option is to find some other program to copy files. TeraCopy works well as a replacement for copying with Windows Explorer. I’ve verified that it does not trigger the caching problem. Although TeraCopy has a command line interface, it just starts the GUI with the parameters you give it. Because it creates a new window, it doesn’t block the command interpreter. As a result, TeraCopy is useless in scripts. At least, I don’t see how to make it work well in a script.

I’ve heard good things about FastCopy, but I haven’t tried it yet. It might be a good replacement for the Windows tools. I don’t know whether FastCopy triggers the caching problem, but I strongly suspect that it doesn’t. I will know more once I test it. Be forewarned: the program has an unusual command line syntax.

Programs that need to copy large files can call CopyFileEx, and pass the new (as of Windows Vista and Server 2008) COPY_FILE_NO_BUFFERING flag in the last parameter. As the documentation for CopyFileEx says:

The copy operation is performed using unbuffered I/O, bypassing system I/O cache resources. Recommended for very large file transfers.

It really does work. If you have a way to call the native CopyFileEx API function, then the copy file problem is solved. C and C++ programmers are in the clear. I’m building a replacement for the .NET File.Copy method, which will allow you to include this parameter. It’s not an ideal solution, but it’s not terribly painful, either. I should have that code available soon.
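In the meantime, the bare P/Invoke call looks something like this sketch. The flag value comes from WinBase.h; there’s no progress reporting or cancellation here, just the unbuffered copy:

using System;
using System.ComponentModel;
using System.Runtime.InteropServices;

static class UnbufferedCopy
{
    // From WinBase.h; available starting with Windows Vista / Server 2008.
    private const uint COPY_FILE_NO_BUFFERING = 0x00001000;

    [DllImport("kernel32.dll", CharSet = CharSet.Unicode, SetLastError = true)]
    [return: MarshalAs(UnmanagedType.Bool)]
    private static extern bool CopyFileEx(
        string lpExistingFileName, string lpNewFileName,
        IntPtr lpProgressRoutine, IntPtr lpData, IntPtr pbCancel, uint dwCopyFlags);

    // Copies a file with unbuffered I/O so the file server doesn't cache it.
    public static void Copy(string source, string destination)
    {
        if (!CopyFileEx(source, destination, IntPtr.Zero, IntPtr.Zero, IntPtr.Zero,
                        COPY_FILE_NO_BUFFERING))
        {
            throw new Win32Exception(Marshal.GetLastWin32Error());
        }
    }
}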

The bigger problem is how to handle file streams in general. As I said above, any program that reads large files across the network can trigger the bad caching behavior on the file server. This common pattern (in C#), for example, can cause the file server to lock up:

    using (var reader = new StreamReader(@"\\server\data\bigfile.txt"))
    {
        // read lines from the file
    }

This code can cause the problem, too:

    using (var reader = new BinaryReader(File.Open(@"\\server\data\bigfile.bin", FileMode.Open)))
    {
        // read binary file here
    }

I’ve never run into the problem using StreamReader, but then I’ve never had to stream a 50 gigabyte text file. I have, however, been tripped up by this problem when using BinaryReader.

The good news for C and C++ programmers and others who use native file I/O (i.e. CreateFile, ReadFile, etc.) is that there’s a solution to the problem, although that solution is somewhat inconvenient. The bad news for .NET programmers is that there is no solution that doesn’t involve unsafe code and some serious mucking about with native memory allocations.

Programmers who call the Windows API functions directly can pass the FILE_FLAG_NO_BUFFERING flag to CreateFile. As the documentation says:

The file or device is being opened with no system caching for data reads and writes. This flag does not affect hard disk caching or memory mapped files.

There are strict requirements for successfully working with files opened with CreateFile using the FILE_FLAG_NO_BUFFERING flag, for details see File Buffering.

The important parts in the File Buffering article have to do with the alignment and file access requirements. Specifically, the buffer size has to be a multiple of the volume sector size and should be aligned on addresses in memory that are integer multiples of the sector size. The second requirement is hardware dependent. So your system might work okay if the alignment is off, but other systems might crash or corrupt data.

So any time you call ReadFile, the address you specify in the lpBuffer parameter must be properly aligned. You can’t pass an arbitrary offset into the buffer like &buffer[15]. You have to do your own buffering. It’s slightly inconvenient, but it’s easy enough to create a wrapper with a buffer and the requisite logic to handle things properly.

.NET programmers are stuck. .NET file I/O ultimately goes through the FileStream class, which knows nothing about the FILE_FLAG_NO_BUFFERING option. There is a FileStream constructor that allows you to specify some options, but the FileOptions enumeration doesn’t include a NoBuffering value. And if you examine the source code for FileStream, you’ll see that it has no special code for ensuring the alignment of the allocated buffer, and no facility for ensuring that reads are done at properly aligned addresses.

You can fake it by creating your own NoBuffering value and passing it to the constructor, but there’s no guarantee that it will work because .NET programs can’t control memory allocation alignment. At best, it would work sometimes.
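For reference, faking it looks something like this; 0x20000000 is the value of FILE_FLAG_NO_BUFFERING in WinBase.h. As I said, nothing guarantees that the buffers involved will meet the alignment requirements, so treat this as an experiment rather than a fix:

    // FILE_FLAG_NO_BUFFERING; not a documented FileOptions member.
    const FileOptions NoBuffering = (FileOptions)0x20000000;

    var stream = new FileStream(@"\\server\data\bigfile.bin", FileMode.Open,
        FileAccess.Read, FileShare.Read, 64 * 1024, NoBuffering);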

It appears that the only solution for .NET programmers is to create an UnbufferedFileStream class that uses VirtualAlloc to allocate the buffer, and has code to properly handle reads and writes. That’s a big job, especially if you want it to serve as a plug-in replacement for the standard FileStream.

Even if you were to write that proposed UnbufferedFileStream class, it wouldn’t handle all cases, and some others would be a bit inconvenient. For example, the code to open a StreamReader would be:

    var ufs = new UnbufferedFileStream(filename, bufferSize);
    using (var reader = new StreamReader(ufs))

You can even put it on a single line, since StreamReader will automatically close the base stream:

    using (var reader = new StreamReader(new UnbufferedFileStream(filename, bufferSize)))

But other things would still be a problem. For example, .NET 4.0 introduced the File.ReadLines method, which returns an enumerator that reads the file a line at a time. It makes quick work out of reading a text file:

    foreach (var line in File.ReadLines(@"\\server\data\bigfile.txt"))
    {
        // do something with the line
    }

But that will undoubtedly exhibit the bad caching behavior if the file you’re reading is very large, and there’s nothing you can do about it because there’s no corresponding method (at least, not that I know of) that will enumerate the lines in a stream that you pass to it. You can certainly create such a method, but then you have to remember to use it rather than the standard File.ReadLines.
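Writing that replacement method is easy enough. A minimal version might look like the sketch below; the point is that the caller supplies the stream, so the caller controls how the underlying file gets opened:

    static IEnumerable<string> ReadLines(Stream stream)
    {
        using (var reader = new StreamReader(stream))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                yield return line;
            }
        }
    }

The annoyance, as I said, is remembering to call it instead of the standard File.ReadLines.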

This is a serious bug

I don’t know if this bug still exists in Windows 7. I find very little information about it online, although there are some reports of users experiencing it when copying relatively small files (four gigabytes, for example) from computers that have only one or two gigabytes of RAM. I expect the problem to get increasingly worse as file sizes and hard drive capacities increase.

I’m somewhat surprised that the bug exists at all. I can’t imagine building an operating system that swaps program code and active data to disk in favor of a file cache. That seems like an incredibly stupid thing to do. But I understand that bugs happen. From what I can glean, though, this bug has existed at least since Windows 2000, which would be more than 10 years. And that really surprises me. You would think that in 10 years they’d be able to come up with a solution that would provide the benefits of file caching without the disastrous side effect of bringing a server to its knees with large files.

There is some chatter online about being able to set caching limits, but I haven’t seen anything like a definitive solution, or even anything that looks promising. Certainly nothing I’ve tried solves the problem. If there is a configuration setting that will prevent this problem, it should be the default. Users shouldn’t have to go digging through tech notes in order to disable an optimization (file caching) that by default is the equivalent of a ticking time bomb.

Lock the taskbar!

I’ll admit, I have my oddities. As a programmer and writer, I’ve always found vertical screen space to be precious. I like to see lots of lines of code or text. So anything I can do to increase the number of lines that I can see is a benefit. Years ago, when Windows 95 first came out, I learned that I could move the taskbar from the bottom of the screen to the right side. (You can also move it to the left or top, if you like.) Doing so gave me a little bit more vertical screen space and also made the taskbar less crowded. You can fit a whole lot more programs on the vertical taskbar than on the horizontal taskbar. I often have a whole lot of windows open. In addition, you can see all of the items in your quick launch and task tray. And I get a laugh whenever somebody sits down at my machine and says, “Where’s the taskbar?” I love confusing people.

All in all, I think the taskbar to the right is much better than on the bottom–especially on this wide-screen monitor.

Except for one thing. For some reason, the Windows XP installation on my laptop insists on resetting the taskbar width to the default whenever I reboot. The default width is too narrow, so the reboot sequence for me always includes the three-step process of unlock the taskbar, adjust the width, lock the taskbar. When I’m installing updates or making modifications to the machine, it becomes incredibly monotonous. Lock the taskbar. Lock the taskbar. I wonder if The Clash would be terribly upset by a parody . . .

Salad claws

Salad claws are implements used for tossing or serving salad. They come in many different forms.

These claws are about four inches long and three inches wide, and are carved from two different pieces of yellow poplar. As you can see, one piece has a lot more of the greenish color that the wood is sometimes known for.

I cut the lighter-colored one out on my bandsaw several months ago. Actually, I cut two from the same board but then I made a big mistake carving one of them. I finally got around to cutting out another (this time with a coping saw), and finished both with a lot of hand carving and sanding.

Debra used them for the first time last night. Her report: they feel good in the hand and do a nice job tossing and serving the salad. The only drawback is that they’re a little too short, so she ended up getting salad dressing on her fingers.

I’ll make the next set a bit longer.