About the Moon: a podcast, a book, and a curiosity

Everybody loves the Moon. But we, people in general, don’t know much about it. We know it follows an approximately monthly cycle and it has something to do with the tides. Oh, and 50 years ago for a brief period we sent some guys up there to check it out.

That’s really too bad. The Moon is fascinating. The unusual nature of the Earth/Moon relationship could very possibly be a major reason life developed on Earth. Or, put another way, without that particular relationship, it’s quite possible that life would not have developed here.

The Radiolab podcast The Moon Itself explores that relationship, and much more. Listening to it got me interested enough to buy and read Rebecca Boyle’s excellent Our Moon: How Earth’s Celestial Companion Transformed the Planet, Guided Evolution, and Made Us Who We Are. It’s well researched and well presented, giving us a look at both the science and the mysticism of the Moon.

Highly recommended.

By the way, it’s odd that of all the objects in the sky, we attach the definite article (“The”) to only three of them. The Sun, the Moon, the Earth. But we don’t say, “the Jupiter” or “the Titan,” “the Betelgeuse.” Even more curious is that, as far as I can tell, we use the Moon’s name to reference satellites of other planets. As in “Titan is Saturn’s largest moon.” Kind of like “She’s my girl Friday” refers to Robinson Crusoe’s servant and friend.

I also wonder if the “Holy Trinity” that I learned about in Catechism class can be traced back through earlier writings and associated with the celestial trinity of Sun, Earth, Moon. There’s certainly a whole load of mysticism associated with Sun and Moon, and I can easily envision a path for that evolution. Whether it can be documented is another matter entirely.

And another aside. I really like that Radiolab provides a text transcript of their episodes. While I’m driving I’ll hear something that I want to go back to later. It’s a whole lot easier to search the transcript than try to find a particular word or sentence in a 40+ minute audio file. Although I wonder if that will be the case for much longer . . .

Resurrecting the blogs

I’ve mentioned before that my blog went offline during the COVID lockdown. I had an issue with my hosting provider that I didn’t handle promptly, and I ended up losing some data. My 100 Birds Project site went offline at the same time. I re-started the blog, although until recently I haven’t been posting much, and I kept saying that “one of these days” I’d restore the content that I lost.

In the last week, I’ve managed to get the 100 Birds Project almost completely restored. All that’s left is the bird index and then checking all the internal links for consistency. That work should be done before the end of next week. That was the easy one, with only about 110 pages.

Restoring my blog is going to be a much longer project. I haven’t counted them, but I wouldn’t be surprised to find that there are 500 or more blog entries from 2007 to 2020. There’s no real rush, though. I could easily restore one month of blog entries a day and probably be done in a few months. Certainly by the end of the year. Likely, I’ll do it considerably quicker than that.

And then maybe I’ll consider converting my old blog entries (from October 2000 until March 2007) to WordPress. That’ll be a much longer project because I was … more prolific early on.

Estimating with π

I recently made a hat out of hardware store twine, a project I’ll write about here. Soon, I hope. In creating the hat, I of course needed to know how large to make it. So I had Debra measure my head. I then divided that by π to get the diameter for the hat layout. Nothing magical there, but it got me to thinking about hat sizes.

In the US, hats are sized in 1/8 inch increments, like 6-5/8 or 7-3/4. The number is computed by measuring the circumference of the head in inches (about 1 centimeter above the ears) and dividing by π. An alternate method is to take the circumference in centimeters and divide by 8.

What?

An example. Debra measured my head at 23 inches. Divided by π gives 7.32. That’s easy enough. Also, 23 inches works out to 58.42 centimeters. Divide that by 8 and you get … 7.30. Close enough for hat sizing! (So, yeah, my hat size is halfway between 7-1/4 and 7-3/8)

That works because to convert from inches to centimeters you multiply by 2.54. And (2.54 * π) is approximately 7.98, which is close enough to 8. Or, showing my work . . .

Hat size = (circumference in inches) / π
Circumference in centimeters = (circumference in inches) * 2.54
Hat size = (circumference in centimeters) / (π * 2.54)
         ≈ (circumference in centimeters) / 8

It’s kind of a cool estimating trick. You can estimate the diameter of any circle by dividing the circumference in centimeters by 8. The problem of course is the unit conversion: measure in centimeters and get the result in inches.
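
Just to make the arithmetic concrete, here’s a quick C# sketch that computes the size both ways, using my 23-inch head as the example (the variable names are mine, nothing official about them):

    // Hat size two ways: inches divided by pi, or centimeters divided by 8.
    double circumferenceInches = 23.0;
    double circumferenceCm = circumferenceInches * 2.54;      // 58.42

    double sizeFromInches = circumferenceInches / Math.PI;    // 7.32
    double sizeFromCm = circumferenceCm / 8.0;                // 7.30

    Console.WriteLine($"{sizeFromInches:F2} vs. {sizeFromCm:F2}");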

When I need to estimate with π, I use a two-step process. First, I multiply by 3. Then I add five percent. For example, a circle with a diameter of 7 inches has a circumference (π * D) of 21.99 inches. Estimating, (7 * 3) = 21, and five percent of 21 is 1.05, giving me an estimated circumference of 22.05 inches. The mental arithmetic of (times three, plus five percent) is a whole lot easier for me than (times three point one four). Also, a simple (times three) is often enough to tell me what I need to know.

Or, if I’m dividing by π, I first divide by 3, which again is often enough to give me the answer I need. If I need a bit more precision I’ll subtract five percent. The result will be about one half percent less than the actual number. Again, close enough for a quick estimate.

I don’t often have to work with π² when estimating, but a similar trick works pretty well. π² is about 9.87, so just multiply by 10; that puts you about 1.3% high. If you want more precision, knock off one percent and you’ll be within about a third of a percent. Again, that’s close enough for a quick mental estimate.

Good riddance to rubbish writing

In “Gen Z never learned cursive. The effects of this are more widespread than you think” (which references an October 2022 Atlantic article, “Gen Z Never Learned To Read Cursive”), the author describes the potential effects of discontinuing cursive writing instruction. In truth, the first article just expands a bit on one or two of the points made in the Atlantic piece.

A note before I continue. Cursive is actually any form of connected writing. What most Americans refer to as “cursive” is the Palmer Method script that was introduced in the early 20th century, that most of us learned to fear and loathe, and that many of us (myself included) can’t write legibly today. In the text below, I use “cursive” to mean the Palmer Method and its variants.

The Palmer Method, by the way, is a refinement of the Spencerian Method of writing that was introduced in the 1840s. Both teach a style of writing that’s optimized for 17th century pens. A primary consideration was keeping the pen on the paper, because every time the pen was lifted and then placed back down, the ink tended to blot. These methods go to sometimes absurd lengths to keep the pen on the paper, even when doing so is less than optimal. The introduction of the inexpensive, mass-produced ball point pen eliminated the ink blotting problem and thus the necessity of always keeping the pen on the paper. But the script lives on.

The big complaint about no longer teaching cursive is that “the past is presented to us indirectly.” That is, because historical documents are written in script that students aren’t taught to read and write, they have to depend on transcription if they want to read the documents. There’s also some complaint that kids won’t be able to read letters from grandma.

And it’s true: without instruction and practice, cursive writing is difficult or impossible to read. Fortunately, it takes about a week to learn how to read it. Maybe a bit of practice after that, but if a child knows how to read block printing, learning to read the cursive script that’s been the mainstay of elementary writing instruction for a century is very easy.

In addition, the cursive script that’s been taught since the early 1900s isn’t even the same script that was used in our founding documents. The Declaration of Independence, perhaps the historical document most often cited in this argument, was written in Round hand, a script that originated in England in the 17th century. Learning to read modern cursive writing certainly helps in reading that document, but it still requires a bit more puzzling out.

Point is, the important historical documents all have been rendered in a typewritten font. There’s little or nothing to be gained for most people in reading the originals. There’s the incredibly weak argument that scholars will have to be taught cursive like they’re taught Elizabethan, Medieval, or ancient Cuneiform script. My response is, “duh.” Digging deeply into the past requires specialized skills. The ability to read cursive today is just a little bit more important than knowing how to hitch a horse to a buggy.

The “past is presented indirectly” argument for learning cursive writing just doesn’t fly. It’s as relevant to day-to-day life as the idiotic argument that those who can’t drive a car with a manual transmission are somehow not “real” drivers. It ranks right up there with the “when I was a boy, we had to trudge through three feet of snow, in the dark, uphill both ways, to use the bathroom” stories we laugh about.

To be fair, there are benefits to learning cursive. It’s much faster than block printing, and there is that ability to read letters from grandma. It helps in developing fine motor skills, and studies show that students with dyslexia find that learning cursive helps with the decoding process. However, people with dysgraphia are hindered by being forced to write in cursive.

Although I haven’t had to depend on my ability to write in cursive for at least 40 years, I do agree that many people must be able to write legibly and more quickly than they can with just block printing. But we should be teaching kids a different method. There are other connected writing systems that are easier to learn, faster, and easier to write legibly than the antiquated loopy Palmer Method that was designed 100 years ago for writing with a quill pen: a technology that was obsolete before the Palmer Method was introduced. (The ball point pen was invented in the 1880s.) For example, Getty-Dubay Italic, introduced in 1976, is much easier to read and write, and doesn’t require the loops and other forms that were necessary in older scripts to prevent the ink from blotting. There are other, similar, writing styles that are optimized for today’s writing instruments.

Whether those newer methods carry with them the benefits of developing fine motor skills and assistance to dyslexic students is an open question. I think it likely. I also suspect that it would have less of a negative impact on those with dysgraphia because the letter forms are so similar to the printed letter forms. As for letters from grandma, that’s becoming irrelevant, too. At 62, I’m older than a lot of grandmas and I’ll tell you from experience that a lot of them write cursive as well and as often as I do: illegibly and almost never. Letters from grandma that are written in cursive will pretty much cease to exist in my lifetime.

It’s true that trying to write those modern scripts using a 17th century quill pen would be as disastrous as taking a horse and buggy on an Interstate highway. Sometimes you have to discard old things and embrace the unquestionable benefits of new technology. Unless you like having to check for black widow spiders after trudging through the snow to the outhouse.

Shingles is no fun

I am suffering through my first experience with shingles. No, not the roofing kind. Shingles is a disease characterized by a painful, blistering skin rash that occurs in a localized area. It’s caused by the varicella-zoster virus, the same virus that causes chickenpox. If you’ve had chickenpox, the virus is likely lying dormant in your nerve cells, where it will come back to haunt you when you’re older.

It’s … unpleasant. And from what information I’ve been able to gather, what I’m currently experiencing is a rather mild case.

According to the Wikipedia article, about one-third of all people will experience at least one attack of shingles in their lifetime. Shingles is more common among older people. If you live to 85 years old, your chance of having at least one attack is about 50%. The article says that fewer than 5% will have more than one attack.

Risk of death from shingles is very low: between 0.28 and 0.69 deaths per million people per year. Combined with the apparently low likelihood of a second attack, some would say that I’m being overly cautious in contacting my doctor to get the vaccine. My response to them is, “tell me that again after your first experience.”

If you’re over 50, talk to your doctor about getting the shot. The vaccine doesn’t guarantee that you won’t have an attack, but studies show that the likelihood is reduced something like 90% over three or four years, and if an attack does occur it will be less severe. Worth repeated trips to the doctor, in my opinion. My “mild case” was quite uncomfortable for a couple of days and it’s going to itch like crazy (like poison ivy) for the next week or so. I’ve known people who’ve had it much worse. Believe me when I tell you that you do not want to experience it.

Wedding bowls

Back in May I decided to make a couple of bowls for my niece’s wedding in late July. Finding wood to make bowls from is no problem: we lost an Arizona Ash tree in the Icepocalypse of 2021, and there’s plenty of that lying around. Originally I had intended to make just one bowl, but then decided on two.

Although I did turn a few bowls when I was a member of TechShop, I don’t have a lathe and have no real plans to get one. I carve my bowls with an angle grinder and a Foredom power carver.

I started with an end grain bowl: just a piece from one of the larger limbs, about 9″ in diameter and about 4″ tall.

The bowl blank is mounted on my holding jig: a piece of galvanized pipe with a floor flange on each end, one screwed to the bowl blank and the other to the workbench. I’ve tried many different ways of holding a piece while working on it, and this is the one I like best. It’d be different if I were doing a more detailed carving, but for carving bowls this is fantastic. It holds the piece securely and I can move around it. The pipe flange is attached to the top of the bowl.

I’ve seen some people hollow the bowl first, before shaping the outside. I don’t understand how they can do that. I always shape the outside first. Here it is after rough shaping.

One of the things I’ve struggled with is smoothing and sanding after the bowl is carved. Smoothing the outside of the bowl can be a very big pain once it’s in the shop. What I do is rough carve the outside, then smooth it with 36, 60, 80, and (if I can) 120 grit flap wheels while it’s still on the holding jig. About half the time I can’t get to 120 grit because it burns the wood. It depends on the type of wood and the moisture content. Sometimes I’ll do the 120 grit sanding by hand while it’s still mounted.

Wood scorched with 120 grit flap wheel
Hand sanded to 120 grit

This bowl was a bit difficult because it was too small to hollow with the angle grinder. I had to resort to my Foredom power carver and a 1″ ball burr. Hollowing took a while. Sanding took even longer, and I had to fill a couple of voids with crushed malachite. But it turned out really nice.

The second bowl is from the same limb, right next to the round bowl. This one is oblong, about 9 inches wide and perhaps 16 inches long. Depth is about 3 inches. Here it is, sitting on the holding jig before I started carving.

Sometime between when I carved the round-ish bowl and when I started on this one, I remembered that I had an Arbortech Mini-Turbo attachment for my angle grinder. That made hollowing this bowl a lot easier.

To hold the bowl in place, I flattened the bottom with an electric hand plane and then glued it to a piece of wood paneling I had left over from when we remodeled the house. I then clamped the paneling to the workbench. This works well, but separating the bowl from the paneling when done is a pain in the neck. I’ve since experimented with several other options, including using less glue (holds well, but still difficult to remove), double-sided tape (works very well and easy to remove), and gluing the bowl to a piece of stiff cardboard (holds well and easier to remove than paneling or plywood). My preferred method is the double-sided tape, but sometimes it doesn’t hold and I have to resort to the cardboard and glue.

One thing I haven’t tried yet is blue painter’s tape and superglue (put a piece of tape on the bench and on the bowl, and add a few drops of superglue to hold them together). I’ve seen that used to good effect in other woodworking situations. I expect it’ll hold well, and removal should be trivial.

I did initial smoothing with a 36 grit flap disc on the angle grinder, then hand sanded to 220 grit, starting at 60 and working my way up. I was pleasantly surprised at the figuring in the wood. I had to get it wet and snap a picture when I had finished the 60 grit pass. It’s just so dang pretty.

Sanded to 60 grit

There were a few small cracks in the bowl that I filled with crushed turquoise. The result is, I think, quite beautiful.

Finished bowl

Finish on both bowls is Half and Half, a product of the Real Milk Paint company. It’s a 50/50 mixture of pure tung oil and a citrus solvent. People I trust say that it’s the best spoon finish, so I figured it should be great for bowls, too.

Starting over

It’s been several years since I actively maintained this blog. I went through a bad time, mentally, before and just after the start of the COVID pandemic, during which I inadvertently lost much of what I had posted over the years. The text of the blog entries is in a MySQL database file that I can probably read if I put some effort into it. The pictures are gone.

That said, I’m finally coming out of my mental haze, and trying to get back on a more even keel. Time to re-start the blog.

As in the past, I’ll write about whatever is on my mind. Computer programming and woodworking (carving, mostly) are probably my most frequent areas of interest, but there’s no telling what I might delve into.

Also note that the format here is likely to change in the near future. I just picked a basic theme to get started. Fiddling with formatting is not high on my list of favorite activities, so it might be a while.

Recovering

One casualty of the 2020 debacle has been my blog. Through a series of unfortunate events, including my deteriorating mental state since the beginning of the lockdown in March 2020, I lost access to my blog and just didn’t want to mess with it. I still have the text of the entries in a database, but it’s in an old format and I haven’t yet looked at how I’m going to import the data. I probably have the images on a backup disk somewhere. It’ll just take time to find and upload them.

For now, I’m just trying to get online again. This is definitely an ugly theme, but it’s a start. My hope is to get this going again in January.

A different way to merge multiple lists

In my previous post, I said that this time I would show full-featured MergeBy and MergeByDescending methods. Before I do that, I want to show an alternate method for merging multiple sorted lists.

If you recall, I said that the generalized k-way merge algorithm is:

    Initialize list indexes
    while any list has items
        Determine which list has the smallest current item
        Output the item from that list
        Increment that list's current index
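
For reference, here’s a bare-bones C# sketch of that procedure. It is not the full-featured MergeBy from the previous post: it just uses Comparer<T>.Default, and it finds the smallest current item with a simple linear scan rather than the heap you’d want when the number of lists gets large.

    // A bare-bones k-way merge following the pseudocode above.
    // Requires System.Collections.Generic and System.Linq.
    public static IEnumerable<T> MergeVersionOne<T>(IEnumerable<IEnumerable<T>> lists)
    {
        var comparer = Comparer<T>.Default;
        // One enumerator per list; each one tracks that list's current index.
        var enumerators = lists.Select(l => l.GetEnumerator())
                               .Where(e => e.MoveNext())
                               .ToList();
        while (enumerators.Count > 0)
        {
            // Determine which list has the smallest current item.
            int smallest = 0;
            for (int i = 1; i < enumerators.Count; ++i)
            {
                if (comparer.Compare(enumerators[i].Current, enumerators[smallest].Current) < 0)
                    smallest = i;
            }
            // Output the item from that list and advance that list's index.
            yield return enumerators[smallest].Current;
            if (!enumerators[smallest].MoveNext())
                enumerators.RemoveAt(smallest);
        }
    }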

As I illustrated last time, that does indeed work. But there is another way to do it that is, asymptotically at least, just as fast. That is, it operates in O(n log₂ k) time.

Imagine you have four sorted lists, named A, B, C, and D. Another way to merge them is this way:

    Merge A and B to create a list called AB.
    Merge C and D to create a list called CD.
    Merge AB and CD to create the final list.

To estimate the time complexity, let’s assume that each of the four lists contains the same number of items. So if the total number of items is n, then each list’s length is n/4.

Merging A and B, then, takes time proportional to n/4 + n/4, or n/2. Merging C and D also requires n/2 time. The final merge of AB and CD takes n/2 + n/2. So the total amount of time it takes to merge the four lists is 2n.

Time complexity of the k-way merge is, as I said, O(n log₂ k). With four lists, that’s n * log₂(4). Since log₂(4) = 2, the time to merge four lists with a total of n items is n * 2, or 2n.

If there is an odd number of lists, then one of the original lists gets merged with one of the larger, already merged, lists. For example, with five lists the task becomes:

    Merge A and B to create a list called AB.
    Merge C and D to create a list called CD.
    Merge AB and E to create a list called ABE.
    Merge ABE and CD to create the final list.

To do that with an arbitrary number of lists, we create a queue that initially contains the original lists. We then remove the first two lists from the queue, merge them, and add the result back to the queue. Then take the next two, merge them, and add the result to the queue. We continue doing that until there is only one list left on the queue. That is:

    Initialize queue with all of the lists.
    while queue has more than one item
        l1 = queue.Dequeue()
        l2 = queue.Dequeue()
        rslt = Merge(l1, l2)
        queue.Enqueue(rslt)
    merged = queue.Dequeue()

At first, it seems like that method will require a lot of extra space to hold the temporary lists. Remember, though, that the merge algorithms I’ve shown so far don’t require much in the way of extra storage: just a little memory to hold the smallest value in each of the lists. That is, O(k) extra space. Perhaps not surprisingly, you can do the same thing with LINQ: because of deferred execution, the intermediate lists are never actually materialized. Here’s the code.

    public static IEnumerable<T> MergeVersionTwo<T>(IEnumerable<IEnumerable<T>> lists)
    {
        // Populate a queue with all the lists.
        var theQueue = new Queue<IEnumerable<T>>(lists);
        if (theQueue.Count == 0) yield break;

        // Do pair-wise merge repeatedly until there is only one list left. 
        while (theQueue.Count > 1)
        {
            var l1 = theQueue.Dequeue();
            var l2 = theQueue.Dequeue();
            var rslt = Merge(l1, l2); // uses the two-way merge
            theQueue.Enqueue(rslt);
        }
        var merged = theQueue.Dequeue();
        foreach (var x in merged)
        {
            yield return x;
        }
    }

That code uses the two-way merge that I presented a while back.
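
If you don’t have that earlier post handy, a minimal stand-in for Merge looks something like this. It’s just a sketch: the real version is more full-featured, but the shape is the same. Hold one current item from each list and always yield the smaller. It uses Comparer<T>.Default so that it composes with the unconstrained MergeVersionTwo above.

    // A minimal deferred-execution two-way merge (sketch only).
    public static IEnumerable<T> Merge<T>(IEnumerable<T> list1, IEnumerable<T> list2)
    {
        var comparer = Comparer<T>.Default;
        using (var e1 = list1.GetEnumerator())
        using (var e2 = list2.GetEnumerator())
        {
            bool has1 = e1.MoveNext();
            bool has2 = e2.MoveNext();
            // Emit the smaller of the two current items until one list is exhausted.
            while (has1 && has2)
            {
                if (comparer.Compare(e1.Current, e2.Current) <= 0)
                {
                    yield return e1.Current;
                    has1 = e1.MoveNext();
                }
                else
                {
                    yield return e2.Current;
                    has2 = e2.MoveNext();
                }
            }
            // Drain whichever list still has items.
            while (has1) { yield return e1.Current; has1 = e1.MoveNext(); }
            while (has2) { yield return e2.Current; has2 = e2.MoveNext(); }
        }
    }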

When building the queue, we’re just setting things up. Everything is an IEnumerable<T>, meaning that no actual work takes place. LINQ’s deferred execution postpones the merging until we start to enumerate the result. And when we do enumerate the result, only one item is produced at a time. All of the intermediate merges are in flight at once, each advancing one step at a time.

Here’s some code that uses the new MergeVersionTwo method:

    var l1 = new int[] {1, 3, 9, 12, 15};
    var l2 = new int[] {2, 7, 14, 16, 19};
    var l3 = new int[] {6, 8, 13, 17, 20};
    var rslt = MergeVersionTwo(new List<int[]> {l1, l2, l3});

    foreach (var i in rslt)
        Console.WriteLine(i);

If you wrap that in a method and single-step it, you’ll see that no real work is done until you get to the foreach. Even then, all that mucking about with the queue doesn’t enumerate any of the lists until you get to the foreach at the end of the MergeVersionTwo method. If you start single-stepping (in Visual Studio, press F11, or the “Step into” debugger command), you’ll see it start working on the two-way merges. Watching how all that works will help you to understand just how powerful deferred execution can be.

It’s important to note that asymptotic analysis just tells us how the run time varies with the size of the input. It doesn’t say that both of the k-way merge algorithms will take the same amount of time. n log₂ k just says that for a particular algorithm the amount of time varies with the size of the input and the logarithm of the number of lists. We’ll have to do some performance tests to determine which is actually faster.

That testing should prove interesting. Before we go there, though, we need to look more closely at the pairwise merge.

Timers don’t keep track of time

In computer programming, the word Timer is used to describe a hardware or software component that raises an event (“ticks”) at a specified interval. So, for example, if you want a notification once per second, you would instantiate a timer and set its period to one second. The timer API often has you specify the period in milliseconds, so you’d say 1,000 milliseconds.

In a C# program, for example, you can create a timer that ticks every second, like this:


    private System.Threading.Timer _timer;
    private int _tickCount;
    public void Test()
    {
        _timer = new System.Threading.Timer(MyTimerHandler, null, 1000, 1000);
        Console.ReadLine();
    }

    private void MyTimerHandler(object state)
    {
        ++_tickCount;
        Console.WriteLine("{0:N0} tick", _tickCount);
    }

It’s important to note that ticks don’t come at exact 1-second intervals. The timer is reasonably accurate, but it’s not perfect. How imperfect is it? Let’s find out. Below I’ve modified the little program to start a Stopwatch and output the actual elapsed time with each tick.

    private System.Threading.Timer _timer;
    private Stopwatch _stopwatch;
    private int _tickCount;
    public void Test()
    {
        _stopwatch = Stopwatch.StartNew();
        _timer = new System.Threading.Timer(MyTimerHandler, null, 1000, 1000);
        Console.ReadLine();
    }

    private void MyTimerHandler(object state)
    {
        ++_tickCount;
        Console.WriteLine("{0:N0} - {1:N0}", _tickCount, _stopwatch.ElapsedMilliseconds);
    }

On my system, that program shows the timer to be off by about 200 milliseconds every three minutes. That’s on a system that isn’t doing anything else of note: no video playing, no downloads, video streaming, or heavy duty background jobs. 200 milliseconds every three minutes doesn’t sound like much, but that’s one second every fifteen minutes, or 96 seconds every day. My granddad’s wind-up Timex kept better time than that.

Granted, the problem could be with the Stopwatch class. But it isn’t. I created a simple program that starts a Stopwatch and then outputs the elapsed time whenever I press the Enter key. I let that program run for days, and every time I hit Enter, it agreed very closely with what my wristwatch said. There was some variation, of course, because no two clocks ever agree perfectly, but the differences between Stopwatch and my wristwatch were a lot smaller than the differences between Stopwatch and the Timer. I have since run that experiment on multiple computers and with multiple clocks, and every time the Stopwatch has been quite reliable.

Things get worse. On a heavily loaded system with lots of processing going on, timer ticks can be delayed by an indeterminate time. I’ve seen ticks delayed for five seconds or more, and then I get a flood of ticks. I’ll see, for example, a tick at 5,000 milliseconds, then one at 9,000 milliseconds followed very quickly by ticks at 9,010, 9,015, and 9,030 milliseconds. What happened is that the system was busy and couldn’t schedule the timer thread at 6,000 milliseconds, so it buffered the ticks. When it finally got around to dispatching, three other ticks had come in and it fired all four of them as quickly as it could.

This can be a huge problem on heavily loaded systems because it’s possible that multiple timer ticks can be processing concurrently.

The only thing a timer is good for is to give you a notification on a periodic basis, at approximately the frequency you requested. A timer pointedly is not for keeping time.

Given that, you can probably explain what’s wrong with this code, which is a very simplified example of something I saw recently. The original code did some processing and periodically checked the _elapsedSeconds variable to see how much time had elapsed. I simplified it here to illustrate the folly of that approach.

    private Timer _timer;
    private int _elapsedSeconds;
    private ManualResetEvent _event;
    private void Test()
    {
        _timer = new Timer(MyTimerHandler, null, 1000, 1000);
        _event = new ManualResetEvent(false);
        _event.WaitOne();
        Console.WriteLine("One minute!");
        _timer.Dispose();
    }
    
    private void MyTimerHandler(object state)
    {
        ++_elapsedSeconds;
        if (_elapsedSeconds == 60)
        {
            _event.Set();
        }
    }

Here, the programmer is using the timer like the second hand of a clock: each tick represents one second. There are at least two problems in that code. First, we’ve already seen that the timer won’t tick at exactly one-second intervals. Second and more importantly, it’s possible on a busy system for multiple timer events to occur concurrently. Imagine what happens in this case:

  1. _elapsedSeconds is equal to 59.
  2. A tick occurs and Thread A is dispatched to handle it.
  3. Thread A increments the _elapsedSeconds value to 60.
  4. Another tick occurs and Thread B is dispatched to handle it.
  5. At the same time, Thread A is swapped out in favor of some higher priority thread.
  6. Thread B increments the _elapsedSeconds value to 61.

At this point, the _elapsedSeconds value has been incremented beyond 60, so the conditional will never be true and the event will never be set. (Well, it might: 4 billion seconds from now, when _elapsedSeconds rolls over. That’s only about 125 years.)

You could change the conditional to _elapsedSeconds >= 60, but that’s missing the point. We’ve already shown that the timer isn’t going to be particularly accurate. If you were trying to time a whole day, you could be off by a minute and a half.

The problem of concurrent execution of timer handlers is another topic entirely that I probably should cover in a separate post. It’s important you understand that it can happen, though, and you have to keep that in mind.

In programming, timers don’t keep time. Don’t try to make them. They can’t count up reliably, and they can’t count down reliably. If you want to keep track of elapsed time, start a Stopwatch and then check its Elapsed or ElapsedMilliseconds properties when necessary.
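
To make that concrete, here’s one way to rework the broken example above so that a Stopwatch does the timekeeping and the timer is nothing more than a periodic prompt to check it. This is just a sketch reusing the same names as before, not code from the project I mentioned.

    private System.Threading.Timer _timer;
    private Stopwatch _stopwatch;            // System.Diagnostics
    private ManualResetEvent _event;

    private void Test()
    {
        _stopwatch = Stopwatch.StartNew();
        _event = new ManualResetEvent(false);
        _timer = new System.Threading.Timer(MyTimerHandler, null, 1000, 1000);
        _event.WaitOne();
        Console.WriteLine("One minute!");
        _timer.Dispose();
    }

    private void MyTimerHandler(object state)
    {
        // Ask the Stopwatch how much time has actually passed rather than
        // counting ticks. Late or bunched-up ticks no longer matter.
        if (_stopwatch.Elapsed >= TimeSpan.FromMinutes(1))
        {
            _event.Set();
        }
    }

The handler can still fire a bit after the minute is actually up, because the check only happens when a tick arrives. But the elapsed time comes from the Stopwatch, so delayed or concurrent ticks can no longer throw off the count.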

On a related note, do not use the clock to measure time.