In How to time code I showed how to use the .NET Stopwatch to time how long it takes code to execute. Somebody asked me about the maximum resolution of Stopwatch: how short of an interval can it time?
Stopwatch uses Windows' high-performance timer, whose frequency you can determine by reading the Stopwatch.Frequency field. I've never encountered a system on which the resolution is anything other than 100 nanoseconds (a frequency of 10 MHz), but I always check anyway. The ShowTimerFrequency function below reads and displays the frequency.
static public void ShowTimerFrequency()
{
    // Display the timer frequency and resolution.
    if (Stopwatch.IsHighResolution)
    {
        Console.WriteLine("Operations timed using the system's high-resolution performance counter.");
    }
    else
    {
        Console.WriteLine("Operations timed using the DateTime class.");
    }

    long frequency = Stopwatch.Frequency;
    Console.WriteLine("  Timer frequency in ticks per second = {0:N0}", frequency);

    double microsecPerTick = (double)(1000L * 1000L) / frequency;
    long nanosecPerTick = (1000L * 1000L * 1000L) / frequency;
    Console.WriteLine("  Timer is accurate within {0:N0} nanoseconds ({1:N} microseconds)",
        nanosecPerTick, microsecPerTick);
}
The output when run on my system is:
Operations timed using the system's high-resolution performance counter.
  Timer frequency in ticks per second = 10,000,000
  Timer is accurate within 100 nanoseconds (0.10 microseconds)
So the theoretical best you can do with Stopwatch is 100 nanosecond resolution. That’s assuming no overhead. But what is the actual resolution you can expect?
Let’s find out.
Timing requires that you start the stopwatch, run your code, and then stop the watch. Broken down, it becomes:
Start the Stopwatch
    Call executes before the watch is started
    Watch is started (reads current tick count)
    Return executes after the watch is started
Execute your code
Stop the Stopwatch
    Call executes while the watch is running
    Watch is stopped (subtracts starting value from current system tick count)
    Return executes after watch is stopped
Overhead is the time to return from the Start call, and the time to make the Stop call (before the current value is read). You can get an idea of the overhead by using a Stopwatch to time how long it takes to do a billion Start / Stop pairs, and subtract the time recorded by the Stopwatch that you start and stop. The code looks like this:
static public void ComputeStopwatchOverhead()
{
    Console.Write("Determining loop overhead ...");
    var numCalls = 1000L * 1000L * 1000L;

    // First, calculate loop overhead
    int dummy = 0;
    var totalWatchTime = Stopwatch.StartNew();
    for (var x = 0u; x < numCalls; ++x)
    {
        ++dummy;
    }
    totalWatchTime.Stop();

    Console.WriteLine();
    Console.WriteLine("Loop iterations = {0:N0}", dummy);
    var loopOverhead = totalWatchTime.ElapsedMilliseconds;
    Console.WriteLine("Loop overhead = {0:N6} ms ({1:N6} ns per call)",
        loopOverhead, (double)loopOverhead * 1000 * 1000 / numCalls);

    Console.Write("Stopwatch overhead ...");

    // Now compute timer Start/Stop overhead
    var testWatch = new Stopwatch();
    totalWatchTime.Restart();
    for (var x = 0u; x < numCalls; ++x)
    {
        testWatch.Start();
        testWatch.Stop();
    }
    totalWatchTime.Stop();

    Console.WriteLine("Total time = {0:N6} ms", totalWatchTime.ElapsedMilliseconds);
    Console.WriteLine("Test time = {0:N6} ms", testWatch.ElapsedMilliseconds);

    var overhead = totalWatchTime.ElapsedMilliseconds - loopOverhead - testWatch.ElapsedMilliseconds;
    Console.WriteLine("Overhead = {0:N6} ms", overhead);
    var overheadPerCall = overhead / (double)numCalls;
    Console.WriteLine("Overhead per call = {0:N6} ms ({1:N6} ns)",
        overheadPerCall, overheadPerCall * 1000 * 1000);
}
This will, of course, be system dependent. The results will differ depending on the speed of your computer and on whether the computer is doing anything else at the time. My system has an Intel Core i5 CPU running at 1.6 GHz and was essentially idle when running this test. My results, running a release build without the debugger attached, are:
Determining loop overhead ...
Loop iterations = 1,000,000,000
Loop overhead = 566.000000 ms (0.566000 ns per call)
Stopwatch overhead ...Total time = 30,219.000000 ms
Test time = 15,131.000000 ms
Overhead = 14,522.000000 ms
Overhead per call = 0.000015 ms (14.522000 ns)
If the timer’s maximum resolution is 100 nanoseconds and there is a 15 nanosecond overhead, then the shortest interval I can reliably time is in the neighborhood of 115 nanoseconds. In practice I probably wouldn’t expect better than 200 nanoseconds and if anybody asked I’d probably tell them 500 nanoseconds (half a microsecond) is the best I can do. It’d be interesting to see how those numbers changed for a newer, faster CPU.
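One way around that limit (a sketch of my own, not from the original post) is to time a large number of iterations of the operation and divide by the iteration count, so that the 100 nanosecond tick size and the roughly 15 nanosecond Start/Stop overhead are amortized across the loop:

using System;
using System.Diagnostics;

static class ShortIntervalTiming
{
    // Returns the average time per operation, in nanoseconds.
    public static double AverageNanoseconds(Action operation, int iterations)
    {
        var sw = Stopwatch.StartNew();
        for (int i = 0; i < iterations; i++)
        {
            operation();
        }
        sw.Stop();

        // Convert elapsed Stopwatch ticks to nanoseconds using the timer frequency.
        double totalNanoseconds = sw.ElapsedTicks * (1e9 / Stopwatch.Frequency);
        return totalNanoseconds / iterations;
    }
}

Keep in mind that the delegate call itself adds overhead, so even this is only an approximation.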
We’re, what, 70 years into the “computer revolution?” By the late ’70s, we’d pretty much settled on one of two different character sequences to denote the end of a text file line. Either a single line-feed (LF) character, or a carriage-return/line-feed pair (CRLF). Well, there was the classic Macintosh that used a single carriage-return (CR), but that’s essentially gone: the Mac these days uses the LF.
The history of line endings is kind of fascinating in a geeky sort of way, but mostly irrelevant now. Suffice it to say that by the late ’70s, Unix systems and those descended from it used LF as a newline character. DEC’s minicomputers, and microcomputer operating systems (like CP/M and later MS-DOS) used CRLF as the newline. If you’re interested in the history, the Wikipedia article gives a good overview.
And so it persists to this day. To the continual annoyance of programmers everywhere. There are Linux tools that don’t handle the CRLF line endings and Windows tools that don’t handle the LF line endings. Everybody points fingers and seemingly nobody wants to admit that it’s a problem that would easily be solved if everybody could get together and decide on a single standard.
Having come up with early microcomputers running CP/M in the early ’80s, I actually used a teletype machine as an I/O device. That machine required a carriage-return to return the print head to the leftmost position of a line, and then a line-feed to advance the paper one line. Thus, the CRLF line ending. Printers, too, required CRLF to start printing on the next line. If you just sent an LF then you’d get something like this:
The quick red fox jumped
                        over the lazy brown dog.
Instead of:
The quick red fox jumped
over the lazy brown dog.
And if you just sent a CR, you’d get the second line printing over the first. (I suppose I could try to figure out how to do that with HTML/CSS in WordPress, or post an image, but I don’t think it’s necessary. I expect you get the idea. The result is a single physical line with overprinted characters.)
I’d sure like to see the industry settle on a single standard. I don’t have a strong preference. LF-only seems like the more reasonable standard, simply because it’s one less character. It’s not like many people talk directly to printers or teletypes anymore: that’s done through device drivers. At this point, the CR in the CRLF line ending is nothing more than an historical remnant of a bygone era. There is no particular need for it in text files.
Microsoft could lead this change pretty easily:
Fund a development effort to modify all of the standard Windows command line tools to correctly handle both types of line endings on input, and provide an option for each command to specify the type of newline to use on output. With the default being LF.
Modify their compiler runtime libraries to intelligently interpret text files with either type of newline. And to output LF-only newlines by default, with an option of CRLF.
Fund a “newline evangelism” group that advocates for the change, writes articles, gives talks, and provides guidance and assistance to developers who are making the switch.
It’d cost them a few dollars over a relatively short period of time, but it would save billions of dollars in lost time and programmer frustration.
A common pattern used when communicating with external services is to retry a call that fails. Stripped of bells and whistles, the typical retry loop looks like this:
result = makeServiceCall(parameters)
numRetries = 0
while !result.success && numRetries < MAX_RETRIES
    // insert delay before retry
    ++numRetries
    result = makeServiceCall(parameters)
end while
if (!result.success)
    // Raise error.
We can quibble about the specific implementation, but the general idea is the same: make multiple attempts to get a success response from the service, inserting a delay between calls. The delay can be a fixed amount of time, an exponential backoff with jitter, etc., and you can include all kinds of other logic to improve things, but it all boils down to essentially the same thing.
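For illustration, here's a minimal sketch (my own, not code from the original post) of what an exponential-backoff-with-jitter variant of that loop might look like in C#; callService stands in for the real service call:

using System;
using System.Threading.Tasks;

static class RetryExample
{
    const int MaxRetries = 2;
    static readonly Random Rng = new Random();

    // callService is a placeholder for whatever service call you're making;
    // it returns true on success.
    public static async Task<bool> CallWithBackoffAsync(Func<Task<bool>> callService)
    {
        bool success = await callService();
        for (int attempt = 1; !success && attempt <= MaxRetries; attempt++)
        {
            // Exponential backoff: 100 ms, 200 ms, 400 ms, ...
            // plus up to 100 ms of random jitter.
            int delayMs = 100 * (1 << (attempt - 1)) + Rng.Next(0, 100);
            await Task.Delay(delayMs);
            success = await callService();
        }
        return success;
    }
}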
Under normal circumstances, of course, service calls don’t fail, and this logic works very well. But if the service call failure rate increases, some very bad things can happen.
Imagine a critical service for which you have a contract (a service level agreement, or SLA) that guarantees an average 10 ms response time for up to 1,000 transactions per second (TPS). Remember, though, that this is a shared service. There are many other clients who have similar contracts: 10 ms response time for X TPS. Your application calls that service perhaps 900 times per second, on average. There will be brief periods when your call rate will exceed 1,000 TPS, but that’s okay because the service is scaled to handle large amounts of traffic from many clients. Say the service can guarantee that 10 ms response time for a total of 1,000,000 TPS from all of its clients, combined. Short-term bursts of excess traffic from a few clients aren’t a problem.
Even if calls to the service exceed 1,000,000 TPS, the likely result at first will be increased response time: perhaps latency increases by 50% with a sustained traffic increase of 10%, and doubles when traffic is 20% above the configured maximum. The specific breaking point differs for every service, but in general latency increases non-linearly with the call rate.
Clients, of course, won’t wait forever for a response. They typically configure a timeout (often two or three times the SLA), and consider the call a failure if it takes longer than that. Not a problem with this retry logic: just delay a bit and try again.
As I said above, this kind of thing works fine under normal conditions. But in a large system, lots of things can go wrong.
Imagine what would happen if the service starts getting 1,500,000 requests per second: a sustained 50% increase in traffic. Or one of the service's dependencies can't meet its SLA. Or network congestion increases the error rate. Whatever the cause of the service's distress, its failure rate goes up, or its latency climbs past the timeout value set by clients, and your application blindly responds to each failure by sending another request. If your MAX_RETRIES value is two, you've effectively tripled the number of calls you make to the service.
The last thing a service under distress needs is more requests. Even if your application is not experiencing increased traffic, your retries still have a negative effect on the service.
Some argue that services should protect themselves from such traffic storms. And many do. But that protection isn’t free. There comes a point when the service is spending so much time telling clients to go away that it can’t spend any time clearing its queue. Not that clearing the queue is much help. Even after the initial problem is fixed, the service is swamped with requests from all those clients who keep blindly retrying. It’s a positive feedback loop that won’t clear until the clients stop calling.
The retry loop above might improve your application’s reliability in normal operation. I say “might” because most applications I’ve encountered don’t actually track the number of retries, so they have no idea if the retry logic even works. I’ve seen the following in production code:
A retry loop that always made the maximum number of retries, even if the initial call succeeded.
Retry logic that never retried. That code was in production for two years before anybody realized there was a problem. Why? Because the service had never failed before.
Retry logic that actually did the retries but then returned the result of the first call.
Infinite retry. When a non-critical service went down one day, the entire site became inoperable.
It's bad enough that many programmers apparently don't test their retry logic; even fewer monitor it. In all the applications I've seen with retry logic, only a handful could tell me how effective it is. If you want to know whether your retry logic is working, you have to log the following (a minimal sketch of such counters appears after the list):
The number of initial calls to the service.
The number of initial call failures.
The total number of calls to the service (including retries).
The number of call successes (including success after retry).
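Here's a minimal sketch of what those counters might look like; the class and method names are mine, not from any particular library:

using System.Threading;

class RetryMetrics
{
    long initialCalls, initialFailures, totalCalls, successes;

    public void RecordInitialCall()    => Interlocked.Increment(ref initialCalls);
    public void RecordInitialFailure() => Interlocked.Increment(ref initialFailures);
    public void RecordCall()           => Interlocked.Increment(ref totalCalls);   // includes retries
    public void RecordSuccess()        => Interlocked.Increment(ref successes);    // includes success after retry

    public override string ToString() =>
        $"initial={initialCalls}, initialFailures={initialFailures}, " +
        $"total={totalCalls}, successes={successes}";
}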
From those numbers, you can determine the effectiveness of the retry logic. In my experience, the percentage of initial call failures to any service under normal operation is less than 1%, and retry succeeds in fewer than 50% of those cases. When a service is under distress and the initial failure percentage gets above about 10%, retry is almost never successful. The reason, I think, is that whatever condition caused the outage hasn’t cleared before the last retry: service outages last longer than clients are willing to wait.
For the majority of applications I’ve encountered, retry is rarely worth the effort it takes to design, implement, debug, test, and monitor. Under normal circumstances it’s almost irrelevant, maybe making the difference between 99% and 99.5% success rate. In unusual circumstances, it increases the load on the underlying service, and almost never results in a successful call. It’s a small win where it doesn’t matter, and a huge liability when it does matter.
If you have existing retry logic in your code, I encourage you to monitor its effectiveness. If, like me, you discover that it rarely provides benefit, I suggest you remove it.
If you’re considering adding retry logic to your code, be sure to consider the potential consequences. And add the monitoring up front.
I ran across this puzzle about 10 years ago. Although it didn’t take me long to come up with the optimum solution, I find the puzzle interesting because it just might be a good interview question for programmers at different levels of experience.
The puzzle:
Given an unsorted array of distinct integers, arrange them such that a1 < a2 > a3 < a4 > a5 < a6 ... So, for example, given the array [9,6,3,1,4,8], one valid arrangement would be [6,9,1,8,3,4]
The obvious solution is to sort the numbers and then interleave them. Put the smallest number at a1, the largest at a2, second smallest at a3, etc. Sorted, our sample array is [1,3,4,6,8,9]. If we interleave the numbers as I describe, the result is [1,9,3,8,4,6]. It’s easy enough to check if that’s right: just replace the commas with alternating < and >: [1<9>3<8>4<6].
Any programmer fresh out of school should be able to come up with that solution and code it, and tell me that the complexity is O(n log n) because of the sort. If a junior programmer supplied that solution to the problem, I’d be satisfied.
But there is an O(n) solution that I’d expect an intermediate or senior engineer to discover if prompted. That is, if they told me the solution was to sort and interleave, I’d ask them if they could think of a solution with lower expected running time.
Given any three successive numbers in the array, there are four possible relationships:
(1) a < b < c
(2) a < b > c
(3) a > b < c
(4) a > b > c
The relationship we desire is (2).
In case (1), the first condition is already met (a < b), and we can swap b and c to give us a < c > b.
In case (2) both conditions are met so we don’t have to do anything.
In case (3), we swap a and b to give us b < a ? c. But we already know that b < c, so if we have to swap a and c to meet the second condition, the first condition is still met.
In case (4), we know that a > c, so if we swap a and b to meet the first condition, the second condition is still met.
Now, add a fourth number to the sequence. You have a < b > c ? d. If it turns out that c < d then there’s nothing to do. If c > d then we have to swap them. But that doesn’t mess up the previous condition because if b > c > d then by definition b > d.
You use similar logic to add the fifth number. You have a < b > c < d ? e. If d > e then there’s nothing to do. If d < e then by definition c < e, so swapping d and e doesn’t invalidate anything that comes before.
That understanding leads to this pseudocode that makes a single pass through the array, swapping adjacent values as required:
for i = 0 to n-2
    if i is even
        if (a[i] > a[i+1])
            swap(a[i], a[i+1])
        end if
    else // i is odd
        if (a[i] < a[i+1])
            swap(a[i], a[i+1])
        end if
    end if
end for
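For completeness, here's a straightforward C# rendering of that pseudocode (a sketch; the original post shows only pseudocode):

static void ZigZag(int[] a)
{
    for (int i = 0; i < a.Length - 1; i++)
    {
        // Even positions should be less than their successor,
        // odd positions should be greater.
        bool outOfOrder = (i % 2 == 0) ? a[i] > a[i + 1] : a[i] < a[i + 1];
        if (outOfOrder)
        {
            int tmp = a[i];
            a[i] = a[i + 1];
            a[i + 1] = tmp;
        }
    }
}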
You're probably familiar with the Birthday Paradox, either from your introductory statistics class or from hanging around computer programmers for too long. Briefly, the math shows that the odds of any two people in a group having the same month and day of birth are quite a bit higher than you would intuitively expect. How high? Given a group of just 23 people, the odds are better than 50% that two of those people will share the same birthday. With 50 people in a group, the odds of two sharing the same birthday are above 97%, and by the time you have 60 people, it's nearly a sure thing.
Testing the birthday paradox
This falls into the realm of what I call "math that just don't feel right." Nevertheless, it's true. We can demonstrate it with a simple program that selects random numbers in the range of 0 to 364 until it selects a duplicate.
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    const int NumKeys = 365;
    const int Thousand = 1000;
    const int Million = Thousand * Thousand;
    const int NumTries = 10 * Million;

    static void Main(string[] args)
    {
        // List contains number of items to collision for each iteration.
        List<int> Totals = new List<int>();

        // Do the test NumTries times, saving the result from each attempt.
        for (int i = 0; i < NumTries; ++i)
        {
            int n = DoTest(NumKeys);
            Totals.Add(n);
            if ((i % 100000) == 0)
            {
                Console.WriteLine("{0,5:N0} - {1:N0}", i, n);
            }
        }

        // Group the items by their result.
        var grouped = from cnt in Totals
                      group cnt by cnt into newGroup
                      orderby newGroup.Key
                      select newGroup;

        // Output a table of results and counts, with running percentage.
        int runningTotal = 0;
        foreach (var grp in grouped)
        {
            runningTotal += grp.Count();
            Console.WriteLine("{0,3:N0}: {1,7:N0} ({2,8:P2}) ({3,8:P6})", grp.Key, grp.Count(),
                (double)grp.Count() / NumTries, (double)runningTotal / NumTries);
        }

        Console.WriteLine("Min items to collision = {0:N0}", Totals.Min());
        Console.WriteLine("Max items to collision = {0:N0}", Totals.Max());
        Console.WriteLine("Average items to collision = {0:N0}", Totals.Average());
        Console.ReadLine();
    }

    // Random number generator used for all attempts.
    static Random rnd = new Random();

    static int DoTest(int nkey)
    {
        HashSet<int> keys = new HashSet<int>();

        // Pick random numbers in the range [0..nkey-1]
        // until a duplicate is selected.
        int key;
        do
        {
            key = rnd.Next(nkey);
        } while (keys.Add(key));

        return keys.Count;
    }
}
The program isn’t as complicated as it looks. I structured it so that you can run the test many times and then aggregate the results. In this particular case, I run 10 million iterations.
The result of each iteration (the count of random numbers selected before a duplicate was found) is saved in the Totals list. When all iterations are done, the code groups the results so that we know how many times each result occurred. Finally, the grouped results are output with a running percentage. Here’s the output from a sample run:
This result agrees with the math. It shows some other interesting things, too.
Look at how many times the second number selected was a duplicate of the first: 0.27%. And in more than one percent of the iterations, the fifth number selected was a duplicate! All told, the odds of finding a duplicate are 11% with just 10 numbers selected.
You can argue that birthdays aren't exactly random, and you'd be right. Some days have more births than others. But those real-world differences shift the "birthday paradox" numbers only slightly, and in the direction of making a collision a little more likely: if anything, the break-even point is a bit below 23 rather than above it.
You can also argue that the numbers returned by System.Random aren’t truly random. Again, you’d be right. You could code up a better random number generator that uses the cryptographic services to generate numbers that are more nearly random, but you’ll find that it won’t have much of an effect on the results.
What does that have to do with hash keys?
Let’s say you’re writing a program that has to keep track of a few million strings and you want a quick way to index them and eliminate duplicates. A fast and easy solution is to use hash codes as keys, right? After all, there are four billion different 32-bit hash codes, and you’re just going to use a few million.
You can probably see where this is going. What’s the likelihood that you’ll run into a duplicate? That’s right, the Birthday Paradox raises its ugly head here. Let’s look at the math and see what it tells us.
What is the probability of getting a collision when selecting one million items out of a pool of four billion (actually, 2^32, or 4,294,967,296)?
According to the Wikipedia article a coarse approximation, which is good enough for our purposes, is:
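The approximation in question is p(n, d) ≈ 1 - e^(-n(n-1)/(2d)), where d is the number of possible values and n is the number of items selected. The original DuplicateProb helper isn't reproduced here, so its exact signature is a guess, but a C# sketch of it might look like this:

static double DuplicateProb(double fieldSize, double nItems)
{
    // p(n, d) ~= 1 - e^(-n(n-1)/(2d))
    double exponent = -(nItems * (nItems - 1)) / (2.0 * fieldSize);
    return 1.0 - Math.Exp(exponent);
}

Plugging in the birthday numbers gives values close to the ones quoted earlier: DuplicateProb(365, 23) ≈ 0.500, DuplicateProb(365, 50) ≈ 0.965, and DuplicateProb(365, 60) ≈ 0.992.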
Those numbers don’t agree exactly with the calculated probabilities in the Wikipedia article or with my experimental results above, but they’re close–within the range of what you’d expect for a coarse approximation.
So, then, about our one million items selected from a field of four billion:
DuplicateProb(4294967296, 1000000) = 1
What?!?!
How can that be possible? The probability of getting a duplicate within the first million items is so close to certainty that the computer calls it 1.0? This has to be a mistake, right?
It happens to be true. Unfortunately, it’s a bit difficult to prove with the birthday test program because Random.Next() only has a range of 0..2^31. I could modify the program to extend its range, but that’s really not necessary. Instead, I’ve run the program for increasing powers of two and present the results here:
Field Size      Min        Max      Average
   2^16           4      1,081          316
   2^20          21      4,390        1,281
   2^30         342    142,952       41,498
   2^31       1,036    193,529       57,606
These are the results of 10,000 runs each. Doing 10 million runs of the 2^16 took quite a while, and the numbers weren’t much different. The only number that changed significantly was the Max value: the maximum number of items selected before a duplicate was found.
The table is more interesting when you replace the Min, Max, and Average values with their corresponding powers of two:
Field Size      Min      Max     Average
    16         2.00    10.08       8.30
    20         4.39    12.10      10.32
    30         8.42    17.13      15.34
    31        10.02    17.56      15.81
For every field size, the average number of items selected before a collision is approximately one half the power of two. Or, just a little more than the square root. That holds true for the birthday problem, too. The square root of 365 is about 19.1, and the 50% probability point is 23.
A really quick way to compute your 50% probability point would be to take the square root. That’s a bit aggressive, but it’ll put you in the ballpark.
More interesting is the Max value in the table. If you think in powers of two, then the Max value is approximately two bits more than the 50% value. That is, 2^30 is a billion items. The square root is 2^15, or 32,768. Two more bits is 2^17, or 128K (131,072). The Max value for 2^30, above, is 142,952: close enough to 2^17 for me.
Given the above, what do you think the numbers for 2^32 would be? The average will be around 83,000, and the Max will be close to 300,000.
That's right: by selecting only 300,000 items from a field of four billion, you're almost guaranteed to find a duplicate. You have roughly a 1% probability of collision with about 9,300 items, and roughly a 10% probability of collision with about 30,000 items selected.
It’s true that hash codes are not purely random, but a good hash function will give you very close to a uniform distribution. The problem is worse if the hash codes aren’t evenly distributed.
In case you haven’t got the point yet, don’t try to use a hash code as a unique identifier. The likelihood of getting a duplicate key is just too high.
I know what you're thinking: what about making a 64-bit hash code? Before you get too excited about the idea, take a look at the Probability Table from that Wikipedia article. With a 64-bit hash code, you have about one chance in a million of getting a duplicate when you're hashing six million items. For one million items it's closer to one chance in 40 million. That sounds pretty unlikely, but it really isn't when you consider how often some programs run.
Granted, if you were only indexing a million items, you’d probably use something like a hash table that can resolve collisions. It’s when you get into very large numbers of items that you begin worrying about the overhead imposed by your indexing method. Using a hash code sounds like a good option, until you discover the likelihood of generating a duplicate. Even with a 64-bit hash code, you have a 1% chance of generating a duplicate with only 610 million items.
Hash codes are great for quickly determining if two items can’t possibly match. That is, if two strings hash to different values, then they positively are not the same. However, hash codes are not a good solution for uniquely identifying items, because the likelihood of two different items hashing to the same value is surprisingly high.
(Originally posted in somewhat different form on 11/21/2016.)
All too often, I run across integer comparison functions that work, but have a latent bug. It’s not a bug that you’ll run into very often but when you do run into it, it’s potentially catastrophic.
A common idiom for comparison is to have a function that, given two integers x and y will return a negative number if x < y, a positive number if x > y, and 0 if x == y. For example:
Compare(1, 2) returns -1 because 1 < 2
Compare(1, 1) returns 0 because 1 == 1
Compare(1, 0) returns 1 because 1 > 0
The correct way to implement such a function is with cascading comparisons, like this:
static int WorkingCompare(int x, int y)
{
    if (x > y) return 1;
    if (x < y) return -1;
    return 0;
}
A clever programmer might figure out that you can get the same result by subtracting y from x. After all, (1-2) == -1, (1-1) == 0, and (1-0) == 1. So the comparison function becomes:
static int BrokenCompare(int x, int y)
{
    return x - y;
}
The aforementioned clever programmer will probably test it with a few mixtures of positive and negative numbers, get the right answers, and drop it into his program. Why not? After all, it’s less code and a single subtraction is going to be a whole lot faster than a couple of comparisons and branches. Right?
Except that it's broken. It doesn't always work. Imagine if x = int.MinValue and y = 4. Or x = int.MaxValue - 100 and y = -1000. Integer overflow rears its ugly head and you get the wrong answer! Don't believe me? Here's a program to test it.
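A minimal test program along these lines (a sketch; the method names are mine) produces output like what follows:

using System;

class CompareTest
{
    static int WorkingCompare(int x, int y)
    {
        if (x > y) return 1;
        if (x < y) return -1;
        return 0;
    }

    static int BrokenCompare(int x, int y)
    {
        return x - y;   // overflows when x and y are far enough apart
    }

    static void Test(int x, int y)
    {
        Console.WriteLine("Compare {0} and {1}.", x, y);
        Console.WriteLine("Working comparison: {0}", WorkingCompare(x, y));
        Console.WriteLine("Broken comparison: {0}", BrokenCompare(x, y));
    }

    static void Main()
    {
        Test(1, 2);
        Test(-1, -2);
        Test(int.MinValue, 4);
        Test(int.MaxValue - 100, -1000);
    }
}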
Compare 1 and 2.
Working comparison: -1
Broken comparison: -1
Compare -1 and -2.
Working comparison: 1
Broken comparison: 1
Compare -2147483648 and 4.
Working comparison: -1
Broken comparison: 2147483644 // Incorrect: -2147483648 is not greater than 4.
Compare 2147483547 and -1000.
Working comparison: 1
Broken comparison: -2147482749 // Incorrect: 2147483547 is not less than -1000
If x is sufficiently negative and y is sufficiently positive, or x is sufficiently positive and y is sufficiently negative, integer overflow will result in an incorrect result. It’s almost a certainty that the clever programmer who wrote that broken comparison function never even considered integer overflow.
“But,” you say, “I know my program won’t be dealing with numbers that big (or small).” Yeah, sure. Not now. But it might in the future. Worse, that comparison function is probably in some class that three years from now will be lifted out of your program into another that doesn’t have the same limits on the input data. Or some other programmer is looking for an example of how to write a custom comparer for his class, sees yours, and copies it to his program. Everything seems fine. Until, a few weeks or months or years later the program runs into this bug and something breaks. Horribly. Perhaps it’s the custom comparer passed to a sorting function and things aren’t sorted correctly. Or, perhaps worse, the sort doesn’t terminate because a > b and b > c, but for some odd reason a < c. I’ve seen something similar happen. Let me tell you that tracking down that bug wasn’t fun.
Part of our job as programmers is to write code that’s bulletproof whenever practical. And I’m not talking bulletproof just now, but also in the future. We can’t foresee all possible scenarios, but code re-use is a very common thing. Sometimes that means that we forego optimization opportunities because the risk of implementing them is too high.
Not that I've seen many programs in which the small difference in the speed of an integer comparison function makes a big enough difference that I'd risk implementing broken code. If I ever did find myself in that situation (and there would have to be a very compelling reason), I would certainly put a big scary comment in there, explaining the limitations and warning others not to use the function for general purposes.
I need to write a program that checks a database every five minutes, and processes any new records. What I have right now is this:
void Main()
{
    while (true)
    {
        CheckDatabaseForUpdates();
        Thread.Sleep(300000); // 5 minutes, in milliseconds
    }
}
Is this the best way, or should I be using threads?
I’ve said before that Thread.Sleep is a red flag indicating that something is not quite right. And when a call to Sleep is in a loop, it’s a near certainty that there is a real problem. Most Sleep loops can be re-coded to use a timer, which then frees the main thread to do other things.
In this example, the “other thing” the main thread could be doing is waiting for a prompt from the user to cancel the program. As it stands, the only way to close the program would be to abnormally terminate the process (Control+C, for example, or through Task Manager). That could be a very bad thing, because the program might be in the middle of updating the database when you terminate it. That’s a virtually guaranteed path to data corruption.
I’m not going to show how to refactor this little program to use a timer because doing so is fixing a solution that is fundamentally broken at a higher level.
How can this program be broken?
I’m so glad you asked.
The program is broken because it’s doing a poor job of implementing a feature that already exists in the operating system. The program is occupying memory doing nothing most of the time. It naps, wakes up periodically, looks around, maybe does some work, and then goes back to sleep. It’s like a cat that gets up to stretch every five minutes and maybe bats a string around for a bit before going back to sleep.
Operating systems already have a way to implement catnap programs. In the Linux world, the cron daemon does it. Windows has Task Scheduler and the schtasks command line utility. These tools let you schedule tasks to execute periodically. Using them has several advantages over rolling your own delay loop.
You can change or terminate the schedule without having to recompile the program. Yes, you could make the program read the delay time from a configuration file, but that just makes the fundamentally broken program a little more complicated.
Task schedules are remembered when the computer is restarted, freeing you from having to start it manually or figure out how to put it in the startup menu.
If a catnap program crashes, it must be restarted manually. A scheduled task, on the other hand, will log the error, and run the next time.
The program doesn’t consume system resources when it’s not doing useful work. It’s like a cat that’s not even there when it’s sleeping. (Which would make for a cat that you don’t see very often, and only for very brief periods.)
Computer programs aren’t cats. If a program isn’t doing active work or maintaining some in-memory state that’s required in order to respond to requests, then it shouldn’t be running. If all your program does is poll a resource on a regular schedule, then you’ve written a catnap that is better implemented with the OS-provided task scheduler.
(I don’t recall if I wrote about this before. I really need to resurrect my old blog entries.)
This was an interesting puzzle to work on. Some of the programmers I subsequently explained it to just got that glazed look and figured I was invoking black magic. Or higher math.
Imagine you have a list of structures, each of which consists of a string key, two integer values, and some other stuff. In essence:
struct Foo
{
    string Name;
    int key1;
    int key2;
    // other stuff that's not relevant to this discussion
}
For this discussion, let’s assume that int is a 32-bit signed integer.
You have a lot of these Foo structures, more than you can keep in memory, but for reasons we don't need to get into, you maintain an index that's sorted by key1 descending and key2 ascending. That is, in C#:
index = list.OrderByDescending(x => x.key1)
            .ThenBy(x => x.key2);
Except that (again, for reasons we don’t need to discuss) the key has to be a single value rather than two separate values.
Packing two int values into a long is no problem. But doing it so that the result sorts with key1 descending and key2 ascending? When I first heard this proposed I wasn’t sure that it was possible. But there was a glimmer of an idea . . .
What I’m about to discuss depends on something called “two’s complement arithmetic,” which defines how most computers these days work with positive and negative integers. I’m going to illustrate with 4-bit numbers but the concepts are the same for any size integer value, including the 32-bit and 64-bit numbers we work with regularly.
With 4 bits we can express 16 possible integer values. Unsigned, that’s values 0 through 15. Signed, using the two’s complement convention, the values we can represent are -8 through +7. The table below shows that in detail.
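Bits    Unsigned    Signed
0000       0           0
0001       1           1
0010       2           2
0011       3           3
0100       4           4
0101       5           5
0110       6           6
0111       7           7
1000       8          -8
1001       9          -7
1010      10          -6
1011      11          -5
1100      12          -4
1101      13          -3
1110      14          -2
1111      15          -1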
When viewed as signed numbers, the high bit (the bit in the leftmost position) is called the sign bit. When it’s set to 1, the number is considered negative.
Now, the problem. We’re given two signed integers, call them key1 and key2, each of which is four bits in length. We want to combine them into a single 8-bit value so that a natural sort (i.e. sorting the single 8-bit quantity) will result in the records being ordered with key1 descending and key2 ascending.
The first thought is to just pack key1 into the high four bits and key2 into the low four bits, and sort the combined value. By itself that doesn't give the ordering we want. The low four bits are compared as a raw bit pattern, and a negative key2 has its sign bit set, so it compares as larger than any non-negative key2. Flipping key2's sign bit fixes that: it maps the signed values -8 through +7 onto the bit patterns 0 through 15 in the same order, so the low half now sorts key2 ascending.
key1 needs its order reversed as well. In two's complement, inverting all of the bits of a number gives ~x = -x - 1, which reverses the ordering: the largest key1 produces the smallest ~key1. So we store ~key1 in the high four bits, and a natural (signed) sort of the combined value orders the records with key1 descending and key2 ascending.
It’s not magic at all. Given an understanding of two’s complement arithmetic and a little study, there’s nothing to it. I won’t claim to be the first person to come up with this, but I did develop it independently.
The explanation above is for two 4-bit quantities combined into a single 8-bit quantity, but it works just as well with two 32-bit quantities combined into a 64-bit quantity.
Putting it all together, we come up with this C# function. It should be easy enough to port to any other language that allows bit twiddling:
public static long getKey(int key1, int key2)
{
    long key = (uint)~key1;     // invert all the bits of key1
    key <<= 32;

    // flip the sign bit of key2
    uint flipped = (uint)key2 ^ 0x80000000;
    key |= flipped;

    return key;
}
And of course you can write the reverse function to retrieve the two original keys from the combined key.
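Here's one way that reverse function might look; this is a sketch of my own (the name getKeys and the out-parameter shape are not from the original):

public static void getKeys(long key, out int key1, out int key2)
{
    // Undo the bit inversion on the high 32 bits ...
    key1 = ~(int)(key >> 32);
    // ... and flip the sign bit of the low 32 bits back.
    key2 = (int)((uint)key ^ 0x80000000);
}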
I’ve been contributing to Stack Overflow for 14 years: pretty much ever since it started. And every year there are Computer Science students who come up with novel ways of screwing up parsing and evaluating arithmetic expressions.
It’s a problem common to all Computer Science curricula: given a string that contains an arithmetic expression (something like 2*(3+1)/4), write a program that evaluates the expression and outputs the answer. There are multiple ways to solve the problem but the easiest as far as I’m concerned is called the Shunting yard algorithm, developed by Edsger Dijkstra. It’s an elegantly simple algorithm that’s very easy to understand and almost trivial to implement once you understand it.
Here’s the funny thing. Computer Science courses often teach postfix expression evaluation (i.e. evaluating “Reverse Polish Notation” like 2 3 1 + * 4 /) as an example of how to use the stack data structure. Then they teach the Shunting Yard and show that by employing two stacks you can turn an infix expression into a postfix expression. (For example, 2*(3+1)/4 becomes 2 3 1 + * 4 /). Then they give the students an assignment that says, “Write a program to evaluate an infix expression.”
The students dutifully go home and write a program that glues the two examples together. That is, the first part converts the infix to postfix, and the second part evaluates the postfix expression to get the answer. Which works. But is kind of silly because a few simple changes to the output method for Shunting Yard let you evaluate the expression directly–without converting to postfix. In my experience, very few students figure this out.
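For what it's worth, here is a sketch of direct evaluation (my own illustration, not the interview code): the same shunting yard loop, but instead of appending operators to a postfix output, each operator is applied to an operand stack at the moment it would have been emitted. It handles non-negative integers, the four basic operators, and parentheses, with no error handling:

using System;
using System.Collections.Generic;

static class InfixEvaluator
{
    static readonly Dictionary<char, int> Precedence = new Dictionary<char, int>
    {
        ['+'] = 1, ['-'] = 1, ['*'] = 2, ['/'] = 2
    };

    public static double Evaluate(string expr)
    {
        var operands = new Stack<double>();
        var operators = new Stack<char>();

        for (int i = 0; i < expr.Length; i++)
        {
            char c = expr[i];
            if (char.IsWhiteSpace(c)) continue;

            if (char.IsDigit(c))
            {
                // Read a whole (possibly multi-digit) number.
                int start = i;
                while (i + 1 < expr.Length && char.IsDigit(expr[i + 1])) i++;
                operands.Push(double.Parse(expr.Substring(start, i - start + 1)));
            }
            else if (c == '(')
            {
                operators.Push(c);
            }
            else if (c == ')')
            {
                while (operators.Peek() != '(') Apply(operands, operators.Pop());
                operators.Pop();    // discard the '('
            }
            else // an operator
            {
                // Where the classic shunting yard would send these operators to
                // the postfix output, apply them immediately.
                while (operators.Count > 0 && operators.Peek() != '(' &&
                       Precedence[operators.Peek()] >= Precedence[c])
                {
                    Apply(operands, operators.Pop());
                }
                operators.Push(c);
            }
        }

        while (operators.Count > 0) Apply(operands, operators.Pop());
        return operands.Pop();
    }

    static void Apply(Stack<double> operands, char op)
    {
        double right = operands.Pop();
        double left = operands.Pop();
        switch (op)
        {
            case '+': operands.Push(left + right); break;
            case '-': operands.Push(left - right); break;
            case '*': operands.Push(left * right); break;
            case '/': operands.Push(left / right); break;
        }
    }
}

Calling InfixEvaluator.Evaluate("2*(3+1)/4") returns 2, with no postfix string ever produced.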
I actually had a job interview where they asked me to write an expression evaluator. I wrote it to evaluate directly from infix, without going through the postfix step. Those whiteboard inquisitions are always weird. At one point, as I was furiously scribbling on the whiteboard, the interviewer asked me where my postfix output was. When I told him that I wouldn't be producing any, he insisted that I must. I don't argue with an interviewer (more on that in a separate entry) and usually take their advice, but here I told him to just sit tight; I knew this was going to work.
The post-coding walkthrough was fun. Usually it’s a formality in which I prove to the interviewer that the code I wrote works as intended. The interviewer already knows whether it works or not, because he’s seen the same algorithm written dozens of different times, knows all the different ways it can be screwed up, and knows what to expect from the walkthrough. But this time the interviewer was following along intently as I explained to him how it all worked. A senior software developer at a major company had never before seen the Shunting yard implemented without that unnecessary conversion to postfix. Weird.
I wonder why this oddity exists. Do Computer Science instructors not tell their students that there’s a way to do the evaluation directly? Or do the instructors themselves not know? And then there’s the other question: why would legions of C.S. students every year come to Stack Overflow looking for the answer to a question that’s easily answered with a simple Google search?
Well, okay, I have an answer to the last one. Laziness. It’s easier to post a question asking, “What’s wrong with my code?” than it is to understand how what the code actually does differs from what it’s supposed to be doing.
Some years back I did a lot of research on and experimentation with the binary heap data structure. During that time I noticed that publicly available implementations placed the root of the heap at array[1], even in languages where arrays are 0-based. I found that incredibly annoying and wondered why they did that.
As far as I've been able to determine, the binary heap data structure originated with Robert W. Floyd's "Treesort" (Algorithm 113), published in the August 1962 edition of Communications of the ACM. As with all algorithms published in CACM at the time, it was expressed in Algol: a language with 1-based arrays. The calculation to find the left child of any parent is childIndex = parentIndex * 2. The right child is (parentIndex * 2) + 1, and the parent of any node can be found by dividing the node index by 2.
When C gained popularity as a teaching language in the 1980s (for reasons that escape me, but I digress), authors of algorithms texts converted their code from Pascal (again, 1-based arrays) to C (0-based arrays). And instead of taking the time to modify the index calculations, whoever converted the code made a terrible decision: keep the 1-based indexing and allocate an extra element in the array.
It’s almost certain that the person who converted the code was smart enough to change the index calculations. I think the reason they didn’t was laziness: they’d have to change the code and likely also change the text to match the code. So they took the easy route. And made a mess.
Programmers in C-derived languages (C++, Java, C#, JavaScript, Python, etc.) learn early on to start counting at 0 because arrays start at 0. If you want an array that holds 100 items, you allocate it: int a[100];, and the indexes are from 0 through 99. This is ingrained in our brains very early on. And then comes this one data structure, the binary heap, in which, if you want to hold 100 items, you have to allocate 101! The element at index 0 isn’t used.
That’s confusing, especially to beginning programmers who are the primary consumers of those algorithms texts.
Some argue that placing the root at array[1] makes the child and parent calculations faster. And that’s likely true. In a 0-based heap, the left child node index is at (parentIndex * 2) + 1, and the parent index of any child is (childIndex - 1)/2. There’s an extra addition when calculating the left child index, and an extra subtraction when calculating the parent. The funny thing is that in Algol, Pascal, and other languages with 1-based arrays, those “extra” operations existed but were hidden. The compiler inserted them when converting the 1-based indexing of the language to the 0-based indexing of the computer.
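To make the comparison concrete, here is the index arithmetic for both conventions (a small sketch, not taken from any particular library):

static class HeapIndex
{
    // Root at array[1] (the 1-based convention)
    public static int LeftChild1Based(int parent) => 2 * parent;
    public static int RightChild1Based(int parent) => 2 * parent + 1;
    public static int Parent1Based(int child) => child / 2;

    // Root at array[0] (the 0-based convention)
    public static int LeftChild0Based(int parent) => 2 * parent + 1;
    public static int RightChild0Based(int parent) => 2 * parent + 2;
    public static int Parent0Based(int child) => (child - 1) / 2;
}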
But in the larger context of what a binary heap does, those two extra instructions will make almost no difference in the running time. I no longer have my notes from when I made the measurements myself, but as I recall the results were noisy: the difference was so small as to be insignificant. Certainly it wasn't large enough, in my opinion, to justify replacing the code. If my program's performance is gated on the performance of my binary heap, then I'll replace the binary heap with a different type of priority queue. A pairing heap, perhaps, or a skip list. The few nanoseconds potentially saved by starting my heap at index 1 just aren't worth making the change.
The C++ STL priority_queue, the Java PriorityQueue, the .NET PriorityQueue, and Python's heapq all use 0-based binary heaps. The people who wrote those packages understand performance considerations. If there were a significant performance gain to be had from a 1-based binary heap, they would have used one. That they went with 0-based heaps should tell you that any performance gain from a 1-based heap is illusory.
I am appalled that well-known authors of introductory algorithm texts have universally made this error. They should be the first to insist on clear code that embraces the conventions of the languages in which they present their code. They should be ashamed of themselves for continuing to perpetuate this abomination.