Stopwatch resolution

In How to time code I showed how to use the .NET Stopwatch class to time how long it takes code to execute. Somebody asked me about the maximum resolution of Stopwatch: how short an interval can it time?
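
As a quick refresher, the basic pattern is simply to start the watch, run the code, and read the elapsed time. The sketch below is mine, not from that article (it assumes a using System.Diagnostics; directive, which all of the snippets in this post need); DoWork is a placeholder for whatever code you want to measure.

    static void DoWork()
    {
        // Placeholder for the code being measured.
        System.Threading.Thread.Sleep(10);
    }

    static public void TimeDoWork()
    {
        var watch = Stopwatch.StartNew();   // start timing
        DoWork();                           // the code under test
        watch.Stop();                       // stop timing

        Console.WriteLine("Elapsed: {0:N0} ticks ({1:N3} ms)",
            watch.ElapsedTicks, watch.Elapsed.TotalMilliseconds);
    }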

Stopwatch uses Windows’ high-performance timer, whose frequency you can determine by reading the Stopwatch.Frequency field. I’ve never encountered a system on which that frequency is anything other than 10 MHz (one tick every 100 nanoseconds), but I always check anyway. The ShowTimerFrequency method below reads and displays the frequency.

    static public void ShowTimerFrequency()
    {
        // Display the timer frequency and resolution.
        if (Stopwatch.IsHighResolution)
        {
            Console.WriteLine("Operations timed using the system's high-resolution performance counter.");
        }
        else
        {
            Console.WriteLine("Operations timed using the DateTime class.");
        }

        long frequency = Stopwatch.Frequency;
        Console.WriteLine("  Timer frequency in ticks per second = {0:N0}",
            frequency);
        double microsecPerTick = (double)(1000L * 1000L) / frequency;
        long nanosecPerTick = (1000L * 1000L * 1000L) / frequency;
        Console.WriteLine("  Timer is accurate within {0:N0} nanoseconds ({1:N} microseconds)",
            nanosecPerTick, microsecPerTick);
    }

The output when run on my system is:

    Operations timed using the system's high-resolution performance counter.
      Timer frequency in ticks per second = 10,000,000
      Timer is accurate within 100 nanoseconds (0.10 microseconds)

So the theoretical best you can do with Stopwatch is 100 nanosecond resolution. That’s assuming no overhead. But what is the actual resolution you can expect?

Let’s find out.

Timing requires that you start the stopwatch, run your code, and then stop the watch. Broken down (and sketched in code after this list), it becomes:

  1. Start the Stopwatch
    • Call executes before the watch is started
    • Watch is started (reads current tick count)
    • Return executes after the watch is started
  2. Execute your code
  3. Stop the Stopwatch
    • Call executes while the watch is running
    • Watch is stopped (subtracts starting value from current system tick count)
    • Return executes after watch is stopped
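
Expressed as code, with the overhead marked, the breakdown looks roughly like this. YourCode is a placeholder, and the comments paraphrase the list above rather than anything from the Stopwatch source.

    // Placeholder for the code being timed.
    static void YourCode() { }

    static public void AnnotatedTiming()
    {
        var watch = new Stopwatch();

        watch.Start();  // The call path runs before timing begins; the return
                        // path runs after the tick count is read, so it IS
                        // included in the measured interval.

        YourCode();     // Step 2: the code you actually want to time.

        watch.Stop();   // The call path runs while the watch is still running,
                        // so it is included too; the return path after the
                        // tick count is read is not.

        Console.WriteLine("Elapsed = {0:N0} ticks", watch.ElapsedTicks);
    }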

Overhead is the time it takes to return from the Start call, plus the time it takes to enter the Stop call (before the current tick count is read). You can get an idea of the overhead by using one Stopwatch to time how long it takes to do a billion Start/Stop pairs on another, and then subtracting the loop overhead and the time recorded by the Stopwatch being started and stopped. The code looks like this:

    static public void ComputeStopwatchOverhead()
    {
        Console.Write("Determining loop overhead ...");
        var numCalls = 1000L * 1000L * 1000L;

        // First, calculate loop overhead
        int dummy = 0;
        var totalWatchTime = Stopwatch.StartNew();
        for (var x = 0u; x < numCalls; ++x)
        {
            ++dummy;
        }
        totalWatchTime.Stop();
        Console.WriteLine();
        Console.WriteLine("Loop iterations = {0:N0}", dummy);
        var loopOverhead = totalWatchTime.ElapsedMilliseconds;
        Console.WriteLine("Loop overhead = {0:N6} ms ({1:N6} ns per call)", loopOverhead, (double)loopOverhead*1000*1000/numCalls);

        Console.Write("Stopwatch overhead ...");
        // Now compute timer Start/Stop overhead
        var testWatch = new Stopwatch();
        totalWatchTime.Restart();
        for (var x = 0u; x < numCalls; ++x)
        {
            testWatch.Start();
            testWatch.Stop();
        }
        totalWatchTime.Stop();
        Console.WriteLine("Total time = {0:N6} ms", totalWatchTime.ElapsedMilliseconds);
        Console.WriteLine("Test time = {0:N6} ms", testWatch.ElapsedMilliseconds);
        var overhead = totalWatchTime.ElapsedMilliseconds - loopOverhead - testWatch.ElapsedMilliseconds;
        Console.WriteLine("Overhead = {0:N6} ms", overhead);

        var overheadPerCall = overhead / (double)numCalls;
        Console.WriteLine("Overhead per call = {0:N6} ms ({1:N6} ns)", overheadPerCall, overheadPerCall * 1000 * 1000);
    }

This will of course be system dependent. The results will differ depending on the speed of your computer and on whether the computer is doing anything else at the time. My system is an Intel Core i5 CPU running at 1.6 GHz, and it was essentially idle when running this test. My results, running a release build without the debugger attached, are:

    Determining loop overhead …
    Loop iterations = 1,000,000,000
    Loop overhead = 566.000000 ms (0.566000 ns per call)
    Stopwatch overhead …Total time = 30,219.000000 ms
    Test time = 15,131.000000 ms
    Overhead = 14,522.000000 ms
    Overhead per call = 0.000015 ms (14.522000 ns)

If the timer’s maximum resolution is 100 nanoseconds and there is roughly 15 nanoseconds of overhead, then the shortest interval I can reliably time is in the neighborhood of 115 nanoseconds. In practice I probably wouldn’t expect better than 200 nanoseconds, and if anybody asked I’d probably tell them 500 nanoseconds (half a microsecond) is the best I can do. It’d be interesting to see how those numbers change on a newer, faster CPU.
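
If you do need to time something that short, one option is to subtract the measured per-call overhead from each result. The sketch below is only an illustration of that idea: overheadPerCallNs is a value you would plug in from a run of ComputeStopwatchOverhead (about 14.5 ns on my system), and ShortOperation is a placeholder for the code being timed.

    // Placeholder for the short operation being timed.
    static void ShortOperation() { }

    static public void TimeWithOverheadCorrection()
    {
        // Assumed to come from a run of ComputeStopwatchOverhead on this machine.
        const double overheadPerCallNs = 14.5;

        var watch = Stopwatch.StartNew();
        ShortOperation();
        watch.Stop();

        double nsPerTick = (1000.0 * 1000.0 * 1000.0) / Stopwatch.Frequency;
        double measuredNs = watch.ElapsedTicks * nsPerTick;
        double correctedNs = Math.Max(0.0, measuredNs - overheadPerCallNs);

        Console.WriteLine("Measured {0:N0} ns, roughly {1:N0} ns after subtracting overhead",
            measuredNs, correctedNs);
    }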