Whittling a knife

Debra and I spent last weekend with our friends at their property near Ranger, TX.  We spent two days just kicking back and relaxing:  reading, talking, and playing with the campfire.  I took my sharpening stone and three different knives that I’d been neglecting for too long.

After putting an edge on my old Buck knife, I grabbed a piece of firewood from the pile and started whittling on it with no particular idea to make anything.  At some point the idea of making a wooden knife struck me, and I ended up spending several hours on it.

Whittled knife with knife used to make it

The project started out as a piece of juniper (what they call cedar around here) about 18 inches long and 1-1/2 inches in diameter.  The finished piece is right at 12 inches long and 1-1/4 inches in diameter.  The blade is 3/4 inch wide.  You can click on the picture to get the full-sized version.

I toyed with the idea of spending more time on it—sanding it down and making the blade thinner at the edge so I could use it as a letter opener—but in the end decided against it.  I like leaving it in this rough form.

It’s been 30 years since I picked up a piece of wood and started whittling on it.  I forgot how relaxing it can be.  I for sure won’t wait another 30 years before I try making something else.  Think I’ll use a different knife, though.  The Buck is a handy tool, but it’s too large for detail work.

That knife, by the way, is sharp.  My old Scoutmaster would be proud.  He’d also be appalled at my clumsiness.  I ended up sinking that blade about 1/4 inch into my thumb on Sunday.

Optimizing the wrong thing

Today I was building a custom hash table implementation and needed a function that, given a number X, would find the next prime number that is equal to or greater than X.  Since X could be pretty large—on the order of one trillion—I knew that done in a naive way, it could take a long time (in computer terms) to compute the next prime number.  So I started looking for a smarter way to do it.

The basic algorithm is pretty easy.  Given a number, X, the following will find the next prime number:

V = X
if (V % 2) == 0
    V += 1
while (!IsPrime(V))
    V += 2
// at this point, V is the next prime number >= X

The simplest way to determine if a number is prime is to see if it’s evenly divisible by any number from 2 up to and including the square root of the number in question.  That is, to determine if 101 is prime, you try dividing by 2, 3, 4, 5, 6, 7, 8, 9, and 10.  If any division results in a remainder of 0, then you know that the number is not prime and you can stop testing.  Only if all the tests fail do you know that the number is prime.  But the square root of one trillion is a million, and I didn’t relish the idea of doing many millions of division operations to find the next prime number.
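
The naive approach can be sketched in a few lines of Python (my own sketch; function names are mine, and the original code was presumably in another language):

```python
def is_prime(n):
    """Naive primality test: trial division by every integer
    from 2 up to and including the square root of n."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(x):
    """Smallest prime >= x, following the pseudocode above."""
    if x <= 2:
        return 2
    v = x if x % 2 else x + 1  # start at x, or the next odd number
    while not is_prime(v):
        v += 2  # even numbers > 2 can't be prime, so skip them
    return v
```

Skipping even candidates cuts the work in half, but each call to `is_prime` still does up to a million divisions for numbers near one trillion.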

A much more efficient method to determine whether a number is prime is to divide it by all the prime numbers up to the square root.  It’s a simple optimization, right?  If the number is not evenly divisible by 2, then it certainly won’t be divisible by 4, 8, 296, or any other even number.  And if it’s not divisible by 5, it won’t be divisible by 10, 15, 20, or 2,465.

That insight can greatly decrease the amount of work you have to do in determining whether a number is prime.  After all, there are only 78,498 prime numbers between 0 and 1,000,000.  So the worst case goes from 500,000 division operations to 78,498.  The only catch is that you need to store all those prime numbers.  That costs some memory.  Naive encoding (four bytes per number) would take about 300K of RAM.  But you can use a table of bits and do it in about 125K.

I was halfway to building the table of primes when I decided to see just how long it takes to find the next prime using the naive method.  I quickly coded up the simplest version and got my answer.  On my machine using the naive method, it takes on average 30 milliseconds (3/100 second) to find the next prime number if the starting number is between 1 trillion and 2 trillion.  Granted, that’s a long time in computer terms.  But in context?

The reason I need to compute the next prime number is that in my hash table implementation the hash table size needs to be prime.  So if somebody asks for a table of 1 million items, I’ll give them 1,000,003.  And when I extend the table, I need to ensure that the new size is prime.  Resizing the table requires that I make a new array and then re-hash all the existing items.  That takes significant time when the table is large.  Fortunately, resizing is a relatively rare operation.
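
In the hash-table context the sizing logic might look like the following skeleton (a hypothetical sketch of my own, repeating a compact next-prime helper so the snippet stands alone; it is not the author's actual implementation):

```python
def is_prime(n):
    """Naive trial division; fine here because resizing is rare."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def next_prime(x):
    """Smallest prime >= x."""
    if x <= 2:
        return 2
    v = x if x % 2 else x + 1
    while not is_prime(v):
        v += 2
    return v

class HashTable:
    """Skeleton showing only the table-sizing logic."""
    def __init__(self, requested_size):
        # A prime table size spreads keys across slots even when
        # hash values share common factors.
        self.size = next_prime(requested_size)
        self.slots = [None] * self.size

    def grow(self):
        # Roughly double, then bump to the next prime.  Re-hashing
        # the existing items (the expensive part) is omitted.
        self.size = next_prime(self.size * 2)
        self.slots = [None] * self.size
```

Asking for a million slots yields a table of 1,000,003, as the post describes.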

The point is that the function is called so infrequently, and the code that calls it takes so much time, that whatever cycles I spend computing the next prime won’t be noticed.  So the “slow,” simple, naive method wins.

I used to rant a lot about programmers spending their time optimizing the wrong thing, pursuing local efficiency even when it won’t affect the overall program running time.  It’s embarrassing when I find myself falling into that trap.

Copying large files on Windows

Warning:  copying very large files (larger than available memory) on Windows will bring your computer to a screeching halt.

Let’s say you have a 60 gigabyte file on Computer A that you wish to copy to Computer B.  Both Computer A and Computer B have 16 gigabytes of memory.  Assuming that you have the network and file sharing permissions set correctly, you can issue this command on Computer B:

copy /v \\ComputerA\Data\Filename.bin C:\Data\Filename.bin

As you would expect, that command reaches across the network and begins copying the file from Computer A to the local drive on Computer B.

What you don’t expect is for the command to bring Computer A and possibly Computer B to a screeching halt.  It takes a while, but after 20 or 30 gigabytes of the file have been copied, Computer A stops responding.  It doesn’t gradually get slower.  No, at some point it just stops responding to keyboard and mouse input.  Every program starts running as though you’re emulating a Pentium on a 2 MHz 6502, using a cassette tape as virtual memory.

Why does this happen?  I’m so glad you asked.  It happens because Windows is caching the reads.  It’s reading ahead, copying data from the disk into memory as fast as it can, and then dribbling it out across the network as needed.  When the cache has consumed all unused memory, it starts chewing on memory that’s used by other programs, somehow forcing the operating system to page executing code and active data out to virtual memory in favor of the cache.  Then, the system starts thrashing:  swapping things in and out of virtual memory.

It’s a well-known problem with Windows.  As I understand it, it comes from the way that the COPY and XCOPY commands (as well as the copy operation in Windows Explorer) are implemented.  Those commands use the CopyFile or CopyFileEx API functions, which “take advantage” of disk caching.  The suggested workaround is to use a program that creates an empty file and then calls the ReadFile and WriteFile functions to read and write smaller-sized blocks of the file.
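
The suggested workaround amounts to a manual block-at-a-time copy loop.  Here is a cross-platform sketch of that loop in Python (my illustration, not the recommended tool itself; note that a real Windows fix would also open the source file unbuffered, e.g. with the FILE_FLAG_NO_BUFFERING flag to CreateFile, which plain `open` doesn’t expose, so treat this as a sketch of the loop rather than a cure for the caching behavior):

```python
def copy_in_chunks(src, dst, chunk_size=1 << 20):
    """Copy src to dst, reading and writing at most chunk_size
    bytes (default 1 MB) at a time, so the copy itself never
    holds more than one chunk in the program's own memory."""
    with open(src, "rb") as fin, open(dst, "wb") as fout:
        while True:
            chunk = fin.read(chunk_size)
            if not chunk:
                break
            fout.write(chunk)
```

The equivalent Win32 version would call ReadFile and WriteFile in the same loop, as the workaround describes.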

That’s idiotic.  There may be very good reasons to use CopyFileEx in favor of ReadFile/WriteFile, but whatever advantages that function has are completely negated if using it causes Windows to cache stupidly and prevent other programs from running. It seems to me that either CopyFileEx should be made a little smarter about caching, or COPY, XCOPY and whatever other parts of Windows use it should be rewritten. There is no excuse for a file copy to consume all memory in the system.

I find it interesting that the TechNet article I linked above recommends using a different program (ESEUTIL, which apparently is part of Exchange) to copy large files.

This problem has been known for a very long time. Can anybody give me a good reason why it hasn’t been addressed? Is there some benefit to having the system commands implemented in this way?

Update, October 16
It might be that Microsoft doesn’t consider this a high priority.  In my opinion it should be given the highest possible priority because it enables what is in effect a denial of service attack.  Copying a large file across the network will cause the source machine to become unresponsive.  As bad as that is for desktop machines, it’s much worse for servers.  Imagine finding that your Web site is unresponsive because you decided to copy a large transaction file from the server.

Hardware Problems

I’ve mentioned before that we use removable drives to transfer data between the data center and the office. Some of those files are very large—50 gigabytes or larger. The other day we discovered an error in one of the files that we had here at the office. The original copy at the data center was okay. Somewhere between when it was created at the data center and when we read it here, an error crept in. There is plenty of room for corruption. The file is copied to the removable, then copied from the removable, transferred across the network, and stored on the repository machine.

The quick solution to that problem is to copy with verify.  That takes a little longer, but it should at least let us know if a bit gets flipped.
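
An end-to-end alternative to copy-with-verify is to hash the file at both ends of the journey and compare digests.  A sketch of that check (my own approach, not something from the workflow described here):

```python
import hashlib

def file_digest(path, chunk_size=1 << 20):
    """SHA-256 of a file, read in chunks so memory use stays flat
    even for multi-gigabyte files."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            h.update(chunk)
    return h.hexdigest()

def copies_match(path_a, path_b):
    """True if the two files have identical contents."""
    return file_digest(path_a) == file_digest(path_b)
```

Computing the digest at the data center, and again after the file lands on the repository machine, would catch a flipped bit at any step in between.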

Saturday we ran into another error when copying a file from the removable drive to its destination on the network.

F: is the removable drive. The machine it was connected to disappeared from the network. I’m still trying to decipher that error message. I can’t decide if we got a disk error or if there was a network error. Did the disk error cause the network error? Or perhaps Windows considers a USB storage device to be a network drive. We removed the drive from that machine, connected it directly to the repository machine, and the copy went just fine. The file checked out okay, leaving me to think that the first machine is flaky.

About a year ago we purchased a Netgear ProSafe 16-port Gigabit Switch (model GS116 v1). It’s been a reliable performer, although it does get a little warm to the touch. Still, we ran it pretty hard and it never had a glitch. We bought another about 6 months ago. Last month, the first one flaked out and started running at 100 Mbps. Not good when you’re trying to copy multi-gigabyte files. This morning, the other one gave up the ghost completely and wouldn’t pass traffic at all.

I suspect that excess heat caused both switch failures. The units were operating in a normal office environment where the ambient temperature is between 75 and 80 degrees Fahrenheit. There was no special cooling and we ran the units pretty hard what with the Web crawler and all. As I said, the switch did get very warm to the touch. In a normal office configuration where the switch doesn’t get a lot of traffic, it probably will hold up fine. But I would not recommend this switch for high duty cycles unless you have special cooling for it.

Bailout, Again

Congress never ceases to amaze me.  Following the unexpected defeat of the measure on Monday, the Senate decided to take a crack at writing something that could pass.  The Senate’s technique was nothing short of brilliant.  Rather than debating the need for the measure and perhaps rewriting provisions to make it more palatable, they did what any good politician would do:  they added bribes to sweeten the deal for those Representatives who voted against it. 

Because the House is responsible for introducing legislation that deals with budgetary matters, the Senate didn’t introduce this as new legislation. Instead, the Senate greatly amended existing legislation, H.R.1424, which apparently has been bouncing around between the House and the Senate for 18 months. The Senate’s version still carries the original bill’s title:

To amend section 712 of the Employee Retirement Income Security Act of 1974, section 2705 of the Public Health Service Act, section 9812 of the Internal Revenue Code of 1986 to require equity in the provision of mental health and substance-related disorder benefits under group health plans, to prohibit discrimination on the basis of genetic information with respect to health insurance and employment, and for other purposes.

And then:

Strike all after the enacting clause and insert the following:

And there follows the 110 pages of the House’s Emergency Economic Stabilization Act of 2008 and 350 more pages of additional legislation.  The bill, now 451 pages long, contains three divisions:

Division A: The Emergency Economic Stabilization Act of 2008.

This is pretty much the same legislation that failed to pass in the House on Monday.  A notable addition is Section 136, which temporarily raises the FDIC and National Credit Union Share Insurance Fund deposit insurance limits from $100,000 per account to $250,000 per account.  The idea behind this move is to reassure businesses and prevent them from moving their deposits from small banks to larger banks that are viewed as more secure.

Division B: Energy Improvement and Extension Act of 2008

Provides tax incentives and credits for renewable energy, investments in cleaner coal technology and biofuels, plug-in electric cars, energy conservation, and many other things.  It’s a scattershot energy bill that’s been working its way through the legislature as H.R.6049 since May of this year. 

Division C: Alternative Minimum Tax Relief Act of 2008

This part began life in June 2008 as H.R.6275, to provide relief from the alternative minimum tax for middle- and low-income taxpayers.  The original version was a few dozen pages long.  The new version is almost 200 pages in length and includes all manner of personal and business tax cuts, credits, and deductions.  And then there are the miscellaneous provisions, among them:

  • The Paul Wellstone and Pete Domenici Mental Health Parity and Addiction Equity Act of 2008 states that health insurance companies have to provide the same level of coverage for mental health benefits as they do for medical and surgical benefits.
  • A reauthorization of the Secure Rural Schools and Community Self-Determination Act of 2000.
  • Disaster relief legislation.

If you read the Table of Contents for each of the divisions, it becomes very clear that the Senate in its infinite wisdom targeted certain provisions to individual parties and even individual states in an attempt to win over those who voted against the original bailout plan.  In doing so, they had to be careful not to add something that would cause members who originally voted for the plan to vote against it.  Honestly, though, I don’t think there was much chance of somebody changing his vote from “aye” to “nay.”

Estimates place the cost of new provisions at about $105 billion, a very large part of which would fit under the category of “pork barrel projects.”  Both McCain and Obama voted for the measure, despite recent public statements about the evils of pork barrel spending.

It’s now five minutes to noon in Washington.  Party whips have been making the rounds in the House, trying to get members’ commitments to support the bill.  The House is set to convene at noon, and will be voting on this bill.  I’d like to think that those who voted against it on Monday will stand their ground, but I fear that politics will once again trump good sense.

What is the problem?

Since Paulson, Bernanke, and Company aren’t forthcoming with layman’s explanations, I thought I’d take a little closer look at the problem to see if I could understand what’s happening here. What I found is that the problem is very large—potentially catastrophic. I’m also astounded by the hubris that got us into this mess, but that’s a topic for another time.

The fundamental problem today is that there is a lack of confidence in the financial system. Not so much by you and me, but by banks that lend money and large institutions that buy the banks’ debt for investment purposes. Investors demand more documentation and higher interest rates, which causes the banks in turn to charge higher interest rates and to be more careful in selecting borrowers. Ultimately, fewer individuals and companies will qualify for loans, and those who do will end up paying higher interest rates. Credit hasn’t dried up, but it’s a lot less available than it was previously. Creditors and investors got burned (in large part by their own greed), and now they’re being very (perhaps overly) cautious.

A key factor in the lack of confidence is that financial institutions’ portfolios have large numbers of assets that are difficult or impossible to value. A simple example would be mortgage backed securities. A mortgage backed security is, in essence, a bundle of loans that a lender has packaged up and sold as a single investment instrument. The price the bank asks for the package is based on the quality of the mortgages that make it up: their face value, the interest rates they pay, their maturities, the quality of the collateral backing them, and the creditworthiness of the borrowers. Industry rating agencies such as Moody’s and Standard & Poor’s assign ratings that give some indication as to the risk level of such securities. Investors demand higher interest rates (or, sometimes, lower prices) for more risk.

But there’s a problem. It’s exceedingly difficult to know the real value of even one mortgage. Imagine the difficulty involved in computing the real value of a box that contains 100 mortgages. If you buy the box directly from the originating bank then you can have some confidence in the box’s value. Assuming, of course, that you trust the bank and the rating agency. But if you as an investor want to sell that box on the open market, you’re subject to the whim of the market. When things are going well, that’s a good thing. People are willing to spend $1,000,000 on that box of low-risk loans that pay an average of 5% interest because they have confidence that they’ll get their payments and that they can re-sell the box if they need the cash.

But the market can go sour. If a bank starts reporting higher levels of loan defaults, then every box of loans that the bank sold becomes less valuable because the probability of the box actually paying the stated interest or being sold at face value is lower. You could find that the market value (the price people are willing to pay) of your million-dollar box of loans is only $900,000.

Because of the difficulty in valuing mortgage backed securities (and many other investment vehicles), a very commonly used method of determining their value is mark to market. The box’s value is what people are willing to pay for it. Generally accepted accounting principles require that financial institutions use mark to market in determining their assets’ values. And transparency laws on the books insist that banks and other regulated financial institutions calculate the value of their assets on a daily basis.

Now, imagine that you’re a small bank. You take deposits from customers and lend money to individuals and local companies. You also buy high-quality (i.e. low-risk) mortgage backed securities to invest the depositors’ money. You’re very conservative with your investments, always get good documentation from your borrowers, and everything’s going great. Whenever you need a little cash to meet depositors’ demands, you sell one of those securities on the open market. This is how the financial system works when people have confidence in it.

But if the bank from whom you bought those mortgage backed securities starts seeing a higher than usual number of loan defaults, the value of the security you’re holding in your vault declines. As that bank’s troubles deepen, so do yours because it becomes increasingly difficult to sell those securities—not only at face value, but at all. If the originating bank defaults, you could very well end up unable to find a buyer for your box of loans. At any price.

You know that if you could open the box and show the individual loans to potential buyers, you could at least salvage some value from your investment, but that’s not the way it works. Your investment is essentially worthless. And since you have to report your financial condition on a daily basis, it doesn’t take long for the public (your depositors) to find out and begin demanding their money. You’re wiped out, and you didn’t do anything wrong.

I’m not just spinning a fantasy here, by the way. Such things do happen, and they have a cascade effect. As more securities become worthless, investors become less inclined to buy any securities. Institutions that hold those securities are unable to sell them in order to meet other obligations, and the financial system grinds to a halt.

That’s it in a nutshell. I’m sure there are economists who will say that the problem is more complex than that. In some ways, that’s true. I haven’t mentioned the more creative investment vehicles (like tranches) and the mania that helped get us where we are. I’ve tried to concentrate on what the problem is, rather than what caused it.

The problem, then, is that financial institutions are holding securities that nobody wants to buy. The securities’ values, based on mark to market, are essentially zero because there is no market for them. The truth is that some of those securities are very valuable and some really are worthless. But right now there’s no way for buyers to know which is which. The point behind the bailout is to create a market—to buy securities and by doing so restore some investor confidence so that they, too, will begin buying.

So that’s the problem and the rationale behind the proposed solution. Whether the proposed solution is right or even necessary is still a matter of some debate among economists, and also in my own mind.

