Stopwatch resolution

In How to time code I showed how to use the .NET Stopwatch to time how long it takes code to execute. Somebody asked me about the maximum resolution of Stopwatch: how short of an interval can it time?

Stopwatch uses Windows’ high-performance timer, whose frequency you can determine by reading the Stopwatch.Frequency field. I’ve never encountered a system on which the timer’s resolution (the duration of one tick) is anything other than 100 nanoseconds, but I always check anyway. The ShowTimerFrequency function below reads and displays the frequency.

    static public void ShowTimerFrequency()
    {
        // Display the timer frequency and resolution.
        if (Stopwatch.IsHighResolution)
        {
            Console.WriteLine("Operations timed using the system's high-resolution performance counter.");
        }
        else
        {
            Console.WriteLine("Operations timed using the DateTime class.");
        }

        long frequency = Stopwatch.Frequency;
        Console.WriteLine("  Timer frequency in ticks per second = {0:N0}",
            frequency);
        double microsecPerTick = (double)(1000L * 1000L) / frequency;
        long nanosecPerTick = (1000L * 1000L * 1000L) / frequency;
        Console.WriteLine("  Timer is accurate within {0:N0} nanoseconds ({1:N} microseconds)",
            nanosecPerTick, microsecPerTick);
    }

The output when run on my system is:

    Operations timed using the system's high-resolution performance counter.
      Timer frequency in ticks per second = 10,000,000
      Timer is accurate within 100 nanoseconds (0.10 microseconds)
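
For reference, converting a Stopwatch reading into nanoseconds is just a matter of scaling the tick count by the frequency. Here’s a minimal sketch (ElapsedNanoseconds is my own illustrative helper, not part of the framework):

    static double ElapsedNanoseconds(Stopwatch watch)
    {
        // One tick lasts (1 / Frequency) seconds, so scale ticks up to nanoseconds.
        return watch.ElapsedTicks * ((1000.0 * 1000.0 * 1000.0) / Stopwatch.Frequency);
    }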

So the theoretical best you can do with Stopwatch is 100 nanosecond resolution. That’s assuming no overhead. But what is the actual resolution you can expect?

Let’s find out.

Timing requires that you start the stopwatch, run your code, and then stop the watch. Broken down, it becomes:

  1. Start the Stopwatch
    • Call executes before the watch is started
    • Watch is started (reads current tick count)
    • Return executes after the watch is started
  2. Execute your code
  3. Stop the Stopwatch
    • Call executes while the watch is running
    • Watch is stopped (subtracts starting value from current system tick count)
    • Return executes after watch is stopped
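
In code, that pattern is only a few lines. Here’s a minimal sketch, with CodeUnderTest standing in as a placeholder for whatever you want to measure:

    var watch = Stopwatch.StartNew();   // 1. start the watch (the return is overhead)
    CodeUnderTest();                    // 2. the code being timed
    watch.Stop();                       // 3. stop the watch (the call is overhead)
    Console.WriteLine("Elapsed ticks = {0:N0}", watch.ElapsedTicks);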

Overhead is the time it takes to return from the Start call, plus the time it takes to make the Stop call (before the current value is read). You can get an idea of the overhead by using one Stopwatch to time how long it takes to do a billion Start/Stop pairs on another Stopwatch, then subtracting the loop overhead and the time recorded by the Stopwatch being started and stopped. The code looks like this:

    static public void ComputeStopwatchOverhead()
    {
        Console.Write("Determining loop overhead ...");
        var numCalls = 1000L * 1000L * 1000L;

        // First, calculate loop overhead
        int dummy = 0;
        var totalWatchTime = Stopwatch.StartNew();
        for (var x = 0u; x < numCalls; ++x)
        {
            ++dummy;
        }
        totalWatchTime.Stop();
        Console.WriteLine();
        Console.WriteLine("Loop iterations = {0:N0}", dummy);
        var loopOverhead = totalWatchTime.ElapsedMilliseconds;
        Console.WriteLine("Loop overhead = {0:N6} ms ({1:N6} ns per call)", loopOverhead, (double)loopOverhead*1000*1000/numCalls);

        Console.Write("Stopwatch overhead ...");
        // Now compute timer Start/Stop overhead
        var testWatch = new Stopwatch();
        totalWatchTime.Restart();
        for (var x = 0u; x < numCalls; ++x)
        {
            testWatch.Start();
            testWatch.Stop();
        }
        totalWatchTime.Stop();
        Console.WriteLine("Total time = {0:N6} ms", totalWatchTime.ElapsedMilliseconds);
        Console.WriteLine("Test time = {0:N6} ms", testWatch.ElapsedMilliseconds);
        var overhead = totalWatchTime.ElapsedMilliseconds - loopOverhead - testWatch.ElapsedMilliseconds;
        Console.WriteLine("Overhead = {0:N6} ms", overhead);

        var overheadPerCall = overhead / (double)numCalls;
        Console.WriteLine("Overhead per call = {0:N6} ms ({1:N6} ns)", overheadPerCall, overheadPerCall * 1000 * 1000);
    }

This will of course be system dependent. The results will differ depending on the speed of your computer and on whether the computer is doing anything else at the time. My system has an Intel Core i5 CPU running at 1.6 GHz and was essentially idle when I ran this test. My results, running a release build without the debugger attached, are:

    Determining loop overhead ...
    Loop iterations = 1,000,000,000
    Loop overhead = 566.000000 ms (0.566000 ns per call)
    Stopwatch overhead ...Total time = 30,219.000000 ms
    Test time = 15,131.000000 ms
    Overhead = 14,522.000000 ms
    Overhead per call = 0.000015 ms (14.522000 ns)

If the timer’s maximum resolution is 100 nanoseconds and there is a 15 nanosecond overhead, then the shortest interval I can reliably time is in the neighborhood of 115 nanoseconds. In practice I probably wouldn’t expect better than 200 nanoseconds, and if anybody asked I’d probably tell them that 500 nanoseconds (half a microsecond) is the best I can do. It’d be interesting to see how those numbers would change on a newer, faster CPU.

Silly API functions

Sometimes I wonder what a programmer was thinking when he came up with a function.  Today’s laugh maker is the Windows API function GetConsoleTitle, used to return the text displayed in the title bar of a console window.  The function prototype is:

    DWORD GetConsoleTitle(LPTSTR lpConsoleTitle, DWORD nSize);

You pass it a pointer to a string and a number that says how long the string is.  If the string you pass is long enough, the function will fill your string with the title bar text and return the length of the text.  Simple.  Right?  Don’t I wish.

The documentation says, “The total size of the buffer required will be less than 64K.”  That’s nice to know.  I’d hate to think that somebody would make the window’s title bar text longer than 65,000 characters.

The real kicker is the discussion of the return value:

  • If the function succeeds, the return value is the length of the string copied to the buffer, in TCHARs.
  • If the buffer is not large enough to store the title, the return value is zero and GetLastError returns ERROR_SUCCESS.
  • If the function fails, the return value is zero and GetLastError returns the error code.

So if the buffer I pass isn’t big enough, what do I do?  I guess the only safe way to call this function is to allocate 64K bytes (or is it 64K characters?) for the buffer.  Otherwise I run the risk of the function failing and being forced to allocate 64K anyway.
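
For what it’s worth, here’s roughly what that “allocate the maximum and hope” approach looks like from C#.  This is only a sketch (it assumes using directives for System.Text and System.Runtime.InteropServices); the ReadConsoleTitle wrapper and the 64K-character buffer size are my own choices:

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern uint GetConsoleTitle(StringBuilder lpConsoleTitle, uint nSize);

    static string ReadConsoleTitle()
    {
        // The only "safe" size, per the documentation: roughly 64K characters.
        StringBuilder buffer = new StringBuilder(64 * 1024);
        uint length = GetConsoleTitle(buffer, (uint)buffer.Capacity);
        if (length == 0)
        {
            // Zero means failure -- or a too-small buffer with GetLastError
            // reporting ERROR_SUCCESS.  With a 64K buffer, assume failure.
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        }
        return buffer.ToString();
    }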

The GetWindowsDirectory function, by the way, isn’t much better.  Its return value is described thusly:

  • If the function succeeds, the return value is the length of the string copied to the buffer, in TCHARs, not including the terminating null character.
  • If the length is greater than the size of the buffer, the return value is the size of the buffer required to hold the path.
  • If the function fails, the return value is zero. To get extended error information, call GetLastError.

Here I have to check the return value against the length that I passed in order to ensure that the buffer I sent was large enough.
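
In code, that check looks something like the following sketch (GetWindowsFolder is my own wrapper name, and the same using directives apply):

    [DllImport("kernel32.dll", SetLastError = true, CharSet = CharSet.Auto)]
    static extern uint GetWindowsDirectory(StringBuilder lpBuffer, uint uSize);

    static string GetWindowsFolder()
    {
        StringBuilder buffer = new StringBuilder(260);   // MAX_PATH to start
        uint length = GetWindowsDirectory(buffer, (uint)buffer.Capacity);
        if (length == 0)
        {
            throw new System.ComponentModel.Win32Exception(Marshal.GetLastWin32Error());
        }
        if (length > buffer.Capacity)
        {
            // Too small; the return value is the required size, so grow and retry.
            buffer.Capacity = (int)length;
            length = GetWindowsDirectory(buffer, (uint)buffer.Capacity);
        }
        return buffer.ToString();
    }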

I chose examples from the Windows API, but that’s not the only API that has screwy functions like this.  The standard C library is chock full of similar oddities, as are many Linux libraries that I’ve worked with.

I understand that these particular API functions were written long ago and maybe I can give a little leeway to the programmers who designed them and the managers who allowed them to be published.  What I don’t understand, though, is how new APIs with similar warts get approved.  Is it really so bad to have a separate function that will return the required size?

Removing the GRUB boot loader

I deleted my Lycoris install the other day because I needed the partition for something else.  What I forgot was that the GRUB boot loader configuration file was on that partition.  Oops.  Computer doesn’t boot.  I set the problem aside and headed off to work, confident that I could fix it easily enough when I got home.  I won’t go into details about all the hoops I had to jump through, but I got it fixed.  A few notes in case you find yourself in a similar situation.

  • It takes a very long time to boot from the Windows 2000 installation media (CD-ROM).
  • The manual recovery process is useless in this situation.  Or it appeared to be.  Windows would spend 10 minutes examining my drives before rebooting, and then GRUB would give me the same error.
  • The Windows Recovery Console, poorly documented as it is, is the solution.  When you boot the install media (this was my fourth or fifth try), get to the Recovery Console and then execute the two commands fixmbr and fixboot.  fixmbr will give you a dire warning that you have a non-standard master boot record and that you might make your disk unreadable.  Everything seems to work, though.
  • You can get more information about Recovery Console from this Microsoft Knowledge Base article.

More on public computers

One more thought on the public computers idea from yesterday.  This is a move that Microsoft would have a very difficult time countering.  Their whole business model is based on licensing software on a per-computer basis.  The XP licensing scheme stores information about the machine on which it’s installed, and will force you to re-register if the hardware changes.  Unless they came up with a thumb drive that had a built-in license key of some kind (and imagine what privacy advocates would have to say about that!), it’d be impossible to use Windows in this manner.  That’s assuming you could get any version of Windows to auto-detect hardware and boot into a usable system without having to install onto the hard drive.  I’m sure Microsoft could cobble something together, but I doubt you could do it with any existing version of Windows right out of the box.

Just a thought.

A built-in HTML editor

I’ve long lamented the lack of a WYSIWYG HTML editor that I could include as a component in a program, and decided today to do something about it.  I’m embarrassed to say that I didn’t have to look far.  The darned thing’s been right under my nose for years.  The Microsoft Web Browser ActiveX control, installed with Internet Explorer versions 4.0 and later, has editing capabilities.  With just a few lines of C# code, I was able to create a program that will load an HTML file and allow me to edit and save the result.  Granted, making a good editor will take a bit more time, but all the important stuff is there.
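
The heart of it is just flipping the loaded document into design mode.  Here’s a rough sketch, assuming a Windows Forms project with the Web Browser ActiveX control dropped on the form as axWebBrowser1 and a reference to the mshtml interop assembly (the method names and file path are mine, purely for illustration):

    private void LoadForEditing(string path)
    {
        // The ActiveX wrapper's Navigate method takes its optional arguments
        // as ref parameters, so pass a dummy for each.
        object nil = null;
        axWebBrowser1.Navigate(path, ref nil, ref nil, ref nil, ref nil);
    }

    // Call this after the control's DocumentComplete event has fired.
    private void EnableEditing()
    {
        mshtml.IHTMLDocument2 doc = (mshtml.IHTMLDocument2)axWebBrowser1.Document;
        doc.designMode = "On";    // the page is now a WYSIWYG editing surface
    }

    private string GetEditedHtml()
    {
        // Read the edited markup back out of the live document.
        mshtml.IHTMLDocument2 doc = (mshtml.IHTMLDocument2)axWebBrowser1.Document;
        return doc.body.parentElement.outerHTML;
    }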

I’ll eventually write up an article about it.  If you’re interested in exploring before I get around to posting my article, a good place to start is this sample chapter from the book Component-Based Development with C#.

What is Microsoft .NET?

What, exactly, is Microsoft .NET?  That’s a question I’ll be answering over the next six or seven months in my new position.  The local Microsoft Technology Center contracted with my employer, Catapult Systems, for a technical writer.  More specifically, for a programmer who can write.  So for at least as long as this contract lasts, I’ll be working on .NET documentation and training materials at the local Microsoft office.  This pretty much removes me from direct involvement with the Inquisite product that I’ve been working with for the last 3-1/2 years.

I find it ironic that a month after I announced my plans to try moving away from Windows at home (see my entry for November 2), I find myself totally immersed in Microsoft technologies at work.  This should prove interesting.

Pondering a Windows-free workstation

At the beginning of September, about the same time I started working with Linux From Scratch, I began wondering if I could become Windows-free at home.  Not for any activist purpose, mind you.  I have no great dislike of Microsoft or proprietary software, nor any great love for Linux or open source software.  Since then, I’ve installed Linux on a test server here, and also on my Celeron 666 machine, and am in the process of learning enough about it so that I can move all of my critical home applications to Linux.  That’s right, I’ll be writing this web site, answering my mail, and surfing the Web with a Linux box as my primary machine.  I haven’t set a time frame, and I’m in no great hurry, but I’m moving that way.

There are several reasons.  After 20 years of MS-DOS and multiple versions of Windows, I’m looking for something new and different.  I’ve been tinkering with Linux for a few years now, and I’ve been having fun learning new things.  If MS-DOS and Windows kept me amused for 20 years, I guess Linux can give me something to think about for at least 5.  More importantly, I’ve been saying for a few years that Linux isn’t yet polished enough for everyday use as a desktop system, but I wonder how much of that is just fear of something new.  I can afford to spend a little time experimenting with the system.  I’ll either prove myself right, or end up with a more useful desktop computer.  Either way, I can’t lose.

As I said, I’m in no big hurry to do this.  As it stands now, I’m still just tinkering with the Linux system, and even after I get my major applications transferred I’ll keep the Windows box around for a while.  Perhaps indefinitely.  However it shakes out, it’s going to be an interesting ride.  Stay tuned.