Long ago, I was a staunch supporter of laissez-faire capitalism, believing that private enterprise was much more efficient and effective at self-regulation than any kind of government intervention. But my support of laissez-faire was based on two assumptions that proved wrong:
- Businesses would operate morally. That is, they would give value for value, and not take advantage of customers, employees, or suppliers.
- People could choose who to do business with, and had the means to find out about companies’ business practices.
All too often, one or both of those assumptions turned out to be wrong. I found that businesses would often operate dishonestly and would make it difficult for others to find out about it. In addition, at the time I was re-thinking my position (late 80s and early 90s), significant natural barriers existed which prevented the average person from finding out about a company’s business practices.
In theory, government’s role in laissez-faire capitalism is to serve as an arbiter: to supply a system of courts where disputes can be resolved. But in practice, the game is rigged in favor of the party with the most money. Rarely can an individual emerge victorious from a dispute with an unscrupulous corporation.
After a lot of internal turmoil, I came to the uncomfortable conclusion that some bit of government regulation is required. I don’t like that conclusion, because I have a very strong (and I think well-placed) distrust of government. But when the kids won’t play nice together, somebody has to be the daddy and enforce some civility. The question, then, becomes “How much government regulation is best?” My answer for the last 20 years has been, “as little as possible.”
I believed that minimal government regulation would keep companies in line: forcing them to operate more transparently, and preventing the most egregious abuses. It’s like putting a simple lock on a door to keep honest people honest. My belief in limited government regulation was based on something that I thought was universally true: individuals and corporations will operate in their own self-interest, and enlightened self-interest is a much better regulator than any government.
I still believe the second part to be true, but the assumption that people will act in their own self-interest turned out to be wrong. Shockingly so.
Some people will lay all of the blame for the credit crisis and the current economic situation on government. Extremists on one side will say that laws such as the Community Reinvestment Act and its ilk caused the problem. The other side holds that government didn’t do enough to pump liquidity into the market, or that it didn’t regulate enough. I’ve seen that debate go back and forth like a tennis ball.
The problem isn’t nearly that simple. There’s certainly plenty of blame to be laid on government’s doorstep. Encouraging (one might say requiring) banks to make high-risk loans was a bad idea that was compounded by failing to exercise any oversight. Failing to issue any kind of warning or institute any kind of control over high-risk investment vehicles was another huge mistake.
The majority of the blame for the current situation rests, not on government’s shoulders, but on the shoulders of the individuals who made a lot of really stupid mistakes for a very long time. No, that’s being too charitable. They didn’t make mistakes. They consciously chose short-term gain over long-term consequences, most often with the certain knowledge that at some point they would have to pay. The best recent example of that is Bernard L. Madoff Investment Securities, although one only has to look at any bank that’s failed or taken government bailout money over the last year to see that the problem was widespread.
Plenty of people warned about the dangers inherent in the kind of “investing” that had become popular over the last few years. It’s inconceivable to me that boards of directors and company CEOs didn’t know that they couldn’t keep it up forever. And yet they kept trying! Not only that, but they actively discouraged their employees from warning about the dangers. I’ve seen several reports of employees being fired because they cautioned against participating in the madness.
Companies and individuals did not act in their own self-interest. They acted like a mob in a frenzy, trying to get as much as they could as quickly as possible, damn the long-term consequences. I can understand individual employees acting in that manner. I cannot understand it being condoned and even encouraged by CEOs and boards of directors. It has shaken my belief in free enterprise.
Understand, banks and financial institutions are not the only ones who acted so stupidly. Any mortgage broker who made a “stated income loan” or some such, any builder who over-extended himself in the hopes he could make a quick buck before sanity was restored, and anybody else who knowingly acted against his own long-term self-interest is equally to blame. I’ll lump in the Big Three auto makers there, as well. The financial crisis didn’t wipe them out. They were doing a fine job of wiping themselves out over the past 10 years or so.
Free enterprise assumes that people will act in their own self-interest. The idea is that what’s good for a business long term is also good for its customers and employees. But if we can’t count on businesses to operate in their own self-interest, then what can we do? Government regulation can help in some ways, mostly to prevent businesses from defrauding their customers or taking advantage of their employees, but it can’t force businesses to make a profit. On the contrary: whether intended or not, much government regulation seems to be aimed at preventing profits of any kind. In any case, we know that complete regulation is a bad idea that’s been tried and failed many times.
I’ve heard the argument that we haven’t really tried free enterprise: that any government intervention in business destroys the system such that we can’t say whether or not the system would work. Do the laissez-faire proponents expect us to believe that companies will all of a sudden start acting morally if all regulations are eliminated? There is no evidence that such a thing would happen, and plenty of evidence to the contrary.
That said, I continue to be highly skeptical of government regulation, and I’m strongly opposed to government control of or even intervention in economic matters. But events of the last few years have left me almost equally skeptical of private enterprise, particularly large corporations whose boards of directors appear to be more concerned with today’s share price than with running a profitable business.
How do we ensure ethical business practices without stifling the free and fair exchange of goods and services? Can it be done through private regulation (i.e. market forces)? If not, is there a “sweet spot” of government regulation that can do the same unobtrusively and efficiently?
My Nikon Coolpix 4600 digital camera stopped working the other day. It just wouldn’t turn on. When I hit the power button, the LED on the top would start blinking. For a while. The camera must have been doing something, though, because it’d drain a pair of batteries in just a few minutes.
An online search revealed a number of people who had the same problem. The “solutions” offered usually fell into one of two categories: send the camera off to be repaired (a hard reset and a firmware upgrade) at a cost of $30 to $100, or pitch the camera and buy a new one.
I finally stumbled onto the answer: remove the memory card, put in new batteries, and hold down the “Image Review” button while simultaneously pressing the “On” button. That reset whatever weird mode the camera was in, and now it’s working just fine. Sure beats either of the recommended solutions.
We’ve been using a number of different computers as file servers here at the office, but we’re to the point now that we really need some kind of centralized data storage. It’s one thing to have a single machine storing a few hundred gigabytes of data. It’s something else entirely to scatter multiple terabytes across four or five machines, and then struggle to remember what’s where.
Last week we picked up two network attached storage (NAS) boxes: a Thecus N5200, and a Thecus N7700. The 5200 supports 5 drives and will be used primarily for offsite backup. The 7700 supports 7 drives and will be our primary (only, hopefully) file server.
Setting these things up turned out to be quite an experience. Not because of any problem with the Thecus boxes. No, those things are wonderful, with very good documentation and a nice browser-based administration interface. We had problems with the drives we bought to put in our fancy new RAID arrays.
Seagate recently released their Barracuda 7200.11, 1.5 terabyte drive. We managed to get a great deal on the drives (about $110 each), and we picked up enough to populate the NAS boxes, plus a few high-powered machines here.
It turns out that early versions of the drive’s firmware have a bug that causes the drive to freeze and time out for minutes at a time, which in turn causes RAID controllers to think that the drive has failed. The results aren’t pretty. Fortunately, there’s a firmware upgrade available. I downloaded mine from the NewEgg page for the drive. Look on the right side about halfway down. (This wasn’t a surprise. We knew about the problem and about the firmware upgrade before we bought the drives.)
Applying the firmware upgrade turned into quite an experience. You see, in order to apply the upgrade you need a DOS prompt. Not a Windows command line prompt. Once you manage to get a machine set up and booting FreeDOS from diskette or CD (you can’t boot from the hard drive because the firmware upgrade program wants to see only one hard drive), you can run the firmware upgrade. It takes less than two minutes to boot the machine and apply the update. Then the drive is ready to go, right?
Silly me. I upgraded the firmware on five drives, put them in the N5200, and started the thing up. Surprisingly, the N5200 reported that the drives were 500 GB, not the 1.5 TB that I thought I had. But the label on the drive says 1.5 TB. Whatever could be going on?
It turns out that one of the many things you can do in the drive’s firmware is set the size. Want to turn your terabyte drive into a 32 gigabyte drive? No problem! A set of utilities from Seagate called SeaTools (download the ISO and burn it to CD, then boot from the CD) includes diagnostics and an interface for setting the drive’s capacity. One option is “Set capacity to max native”. For me, SeaTools reported that setting the drive’s capacity failed, adding “be sure that drive has been power cycled.” When I turned the machine off and back on, the drive reported 1500.301 gigabytes. There’s my 1.5 terabyte drive.
After upgrading the firmware on all drives and using SeaTools to set their capacities, I finally managed to get the RAID arrays set up. The N5200 is running RAID-5, giving us about 5.4 terabytes for our offsite backup. The N7700 is running RAID-6, giving us about 6.5 terabytes of live data in a single place. That should hold us for a while.
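The capacity arithmetic behind those numbers is worth spelling out: RAID-5 spends one drive’s worth of space on parity, RAID-6 spends two, and a drive sold as “1.5 TB” holds 1.5 × 10¹² bytes, which is only about 1.36 terabytes as most software counts them (2⁴⁰-byte units). Here’s a quick sketch of the math; the figures above come out a bit lower still because of formatting and filesystem overhead:

```python
# Usable space in a RAID array: (drives - parity_drives) * drive_size.
# Drive makers count a terabyte as 10**12 bytes, but most software
# reports capacity in 2**40-byte units, so the numbers shrink on screen.

DRIVE_BYTES = 1.5 * 10**12   # one "1.5 TB" Barracuda 7200.11

def usable_tib(drives, parity_drives):
    """Raw usable capacity, converted to binary terabytes."""
    raw_bytes = (drives - parity_drives) * DRIVE_BYTES
    return raw_bytes / 2**40

n5200 = usable_tib(5, 1)   # RAID-5: one drive's worth of parity
n7700 = usable_tib(7, 2)   # RAID-6: two drives' worth of parity

print(round(n5200, 2))     # ~5.46, close to the 5.4 reported
print(round(n7700, 2))     # ~6.82 before formatting overhead
```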
A couple of notes on the Thecus boxes:
- Initial setup of the Thecus is a little inconvenient. The default IP address is 192.168.1.100, so I had to cobble together a network from an old switch and hook my laptop to it. Once I changed the IP address to fit on our subnet (10.77.76.xxx), setup went quickly. There might be a way to change the IP address from the front panel. If so, that would probably be easier than throwing a network together.
- The N5200 took almost 24 hours to format and create the RAID-5 array with five 1.5 terabyte drives. The N7700 took about 8 hours to format and create the RAID-6 array with seven 1.5 terabyte drives. I suppose this is just the result of better hardware and firmware.
- There must be some magic incantation to configuring the date and time settings. If I set the date and time, and tell it not to update with an NTP server, everything works just fine. But if I enable NTP update (manual or automatic), then the time is totally screwed up. One box was 11.5 hours slow, and the other was a few hours fast. (As an aside, I have another piece of equipment that insists on reporting the time in UTC, even though I’ve set the time and told it that I’m in the US Central time zone. I’m beginning to believe that *nix-based servers don’t like me.)
- The Thecus boxes do way more than just serve files. We probably won’t use all those features, but others might. I especially like the support for USB printers, and the built-in FTP server. With DHCP enabled and machines connected to a switch off the LAN port, one of these things is a single-box subnet. I don’t know what kind of traffic will pass from the WAN to the LAN ports on these things, but if it’s fully blocked it’d make an effective home router to connect to a cable modem.
Anyway, we’re in the middle of copying data and retiring or re-tasking some of our old file servers. This is going to take some time. A gigabit network is quick until you start copying multiple terabytes…
It’s been an interesting few weeks here. We’ve been collecting data much faster than we anticipated, and we’ve had to upgrade hardware. One thing we’ve had to do is bring several of our servers from 16 gigabytes of RAM to 32 gigabytes.
Memory is surprisingly inexpensive. You can buy four gigabytes of RAM (2 DIMMs of 2 gigabytes each) for $55. That’s fine for maxing out your 32-bit machine, or you can bring a typical 64-bit machine to eight gigabytes with those parts for only $110.
Most server RAM is more expensive: about double what desktop memory costs. In addition, most servers have only eight memory slots, making it difficult or hideously expensive to go beyond 16 gigabytes. The reason has to do with the way that memory controllers access the memory. Controllers (and the BIOS, it seems) have to know the layout of chips on the DIMM, and most machines are set up to use single-rank or dual-rank RAM. A 2-gigabyte DIMM that uses single-rank RAM will have eight (or nine, if ECC) 2-gigabit chips on it. A dual-rank DIMM will have 16 (18) 1-gigabit chips. Dual-rank will typically be less expensive because the lower-density chips cost less than half of the higher-density chips.
The inexpensive 2-gigabyte DIMMs are typically dual-rank, meaning that the components on them are 1-gigabit chips. If you want a 4-gigabyte DIMM, then you have to step up to 2-gigabit chips. And those are very expensive. The other option is to buy quad-rank memory, which uses the 1-gigabit chips. Quad-rank 4-gigabyte DIMMs for servers are currently going for $70 or $80. Figure $20 per gigabyte.
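The rank arithmetic above is just bookkeeping, and it can be sketched out: a DIMM’s capacity is ranks × chips-per-rank × chip density, with eight data chips per rank (nine with ECC):

```python
# Chips on a DIMM: capacity = ranks * chips_per_rank * chip_density.
# Non-ECC DIMMs carry 8 data chips per rank; ECC adds a 9th that
# doesn't count toward usable capacity.

GBIT = 2**30       # bits in a gigabit
GBYTE = 8 * 2**30  # bits in a gigabyte

def dimm_bits(ranks, chip_gbit, ecc=False):
    """Total data bits on a DIMM (ECC chips excluded from capacity)."""
    chips_per_rank = 8
    return ranks * chips_per_rank * chip_gbit * GBIT

print(dimm_bits(1, 2) // GBYTE)  # 2 GB single-rank: 8 chips of 2 Gbit
print(dimm_bits(2, 1) // GBYTE)  # 2 GB dual-rank: 16 chips of 1 Gbit
print(dimm_bits(4, 1) // GBYTE)  # 4 GB quad-rank: 32 chips of 1 Gbit
```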
The only catch is that most older computers’ memory controllers don’t support the quad-rank DIMMs. I do know that Dell’s PowerEdge 1950 server with the latest BIOS supports quad-rank. The Dell 490 and 690 machines do not.
If you’re in the market for a new computer that you expect to load up with RAM, you should definitely make sure that it supports quad-rank memory. If you’re adding memory to an older machine, you might save a lot of money by doing some research to see if you can upgrade the BIOS to support quad-rank.
Updated 2008/12/13, see below
A while back I mentioned that I was unable to burn a CD on my Windows Server 2008 box. At that point I didn’t have time to figure out what was going on. Today I needed to burn a CD, and had some time to fiddle with it.
On my Windows XP development box, I used ISO Recorder to burn ISO images to CD. There’s not much to the program: just right-click on a .iso file and select “Burn to CD” from the popup menu. It’s simple and it works. I wish all software worked so well. Unfortunately, it doesn’t appear to work on my Server 2008 box. The program doesn’t recognize my CD/DVD burner.
A bit of searching located ImgBurn, a free CD/DVD reader and recorder that works on all versions of Windows, including 64-bit versions. This is more of a full-featured application than ISORecorder, but still quite easy to use. It took me no time at all to start the program and tell it to burn an ISO image to CD.
I don’t know why ISORecorder won’t recognize the CD burner on my box. The program says it works for Vista, so I imagine it should also work on Server 2008. But it doesn’t and I don’t have the time or inclination to figure out why. I’ve found ImgBurn, and I’m happy.
I tried to burn another CD today, and ImgBurn failed to recognize the recorder. It turns out that I have to run the program with elevated privileges (i.e. “Run as Administrator”). I didn’t have to do that the first time, because I started ImgBurn from the installation program, which was already running as Administrator.
Also of note: ImgBurn will not work if Windows Media Player is running. At least, it won’t on my machine. Media Player apparently locks or otherwise allocates the optical drive by default. Perhaps there’s a way to turn that “feature” off, I don’t know.
I suspect that ISORecorder will work, too, if I try it in Administrator mode. The beauty of ISORecorder is that all I have to do is right-click on a .ISO file, and it will be written to the CD. But I don’t know how to make that program run with Administrator privileges.
We noticed Charlie having some trouble walking about two weeks ago, and the Saturday after Thanksgiving it got bad enough that we had to take him to the vet. His back legs were working, but not well, and he was whimpering a bit as if in pain. When that dog starts showing pain, you know there’s something wrong. After x-rays of Charlie’s spine and a night of observation, we were referred to a specialist.
We went to the specialist on Tuesday of last week. He did a myelogram, consulted with a radiologist, and diagnosed a ruptured disk or some other type of blockage that was preventing Charlie’s back legs from working fully. He recommended surgery, as those kinds of injuries don’t typically fix themselves. The surgery was yesterday (Monday).
Charlie came through the surgery fine. He’s still at the vet, though, recovering. Unfortunately, the doctor didn’t find what he was looking for: no evidence of a ruptured disk or other type of injury to the spinal column that would cause the blockage. He did, however, see some swelling in the area, which would present as a spinal injury in the myelogram.
The most likely diagnosis now is a fibrocartilaginous embolism (FCE):
FCE results when material from the nucleus pulposus (the gel-like material which acts as a force-absorbing cushion between two vertebrae) leaks into the arterial system and causes an embolism or plug in a blood vessel in the spinal cord. The condition is not degenerative, and therefore does not worsen. FCE is not painful for the pet, but some permanent nerve damage is likely. Roughly half of all patients diagnosed with FCE will recover sufficient use of their limbs.
Searching for “fce dogs” on the Internet will bring up some frightening pages, many of which indicate that the neurological damage is permanent. After last night’s reading, I was resigned to Charlie being partially paralyzed for the rest of his life. But after talking with the doctor today and reading some case studies, I’m much more hopeful. The doctor, based on his experience with about 200 FCE cases, says that there’s a 60 to 70 percent chance that Charlie will recover fully.
It’s unfortunate that he had to go through what turned out to be an unnecessary surgery, but all the tests indicated that the surgery was the proper course of action. Charlie’s pretty miserable right now, but he’s still relatively young (7 years old), and very strong. I expect he’ll be recovered from the surgery very quickly, and then we can see about getting some of his mobility back.
In a very thinly reported move last week, the Federal Reserve announced that it will spend up to $600 billion buying up obligations of government-sponsored enterprises and mortgage-backed securities, many of which were the original targets of the $700 billion “bailout” plan back in October.
To me, the most interesting thing about this move is that it won’t cost the Federal Reserve anything. Well, whatever it costs for paper, ink, and the electricity to run the presses while they print up $600 billion in crisp new bills. That’s right, they’re going to “pay” for those assets by expanding the money supply. Not a bad racket if you can get it. All of a sudden the Fed has $600 billion to spend that it didn’t have to get by running, hat in hand, to Congress. Nope. Just print up the bills and nobody’s the wiser. Never mind that the dollars you’re holding today are worth slightly less than they were yesterday.
Something similar happens when the Treasury injects money into the banks. In exchange for its $25 billion, a bank gives the government shares of stock. They’re special non-voting shares, which is a good thing, but they’re new shares that dilute the value of existing shares. Stockholders end up paying for it in two ways: the value of their shares is diluted, and they have to pay increased taxes to offset the government expense (or, if new money is printed, pay the hidden tax of inflation).
How about a simple example to illustrate the concept? Imagine that your neighborhood creates a lawnmower co-op. For $10, you get one of 10 shares in the co-op. But the co-op falls on hard times; it needs to upgrade the mower, maybe. The co-op issues 10 new shares, bringing the total number of shares to 20, and sells those shares to new members at $5 each. All of a sudden, your ownership stake in the co-op is half what it was, and because the new shares sold for less than the old ones were worth, each share is now worth less than you paid. That’s bad enough, but you also get hit with an assessment from the co-op for increased maintenance costs. That’s pretty much what’s happening to stockholders when the banks take the bailout money.
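For what it’s worth, the co-op numbers work out like this. This sketch assumes the co-op’s assets were worth the full $100 the original members paid in before the new shares were sold:

```python
# Dilution in the lawnmower co-op example.
# Assumption: the co-op is worth the full $100 the original members
# paid in, before the discounted new shares are issued.

old_shares = 10
pre_money_value = old_shares * 10.0     # $100 at $10/share

new_shares = 10
proceeds = new_shares * 5.0             # $50 of new cash at $5/share

total_shares = old_shares + new_shares  # 20 shares outstanding
post_money_value = pre_money_value + proceeds

value_per_share = post_money_value / total_shares
stake_before = 1 / old_shares           # you owned 10% of the co-op
stake_after = 1 / total_shares          # now you own 5%

print(value_per_share)                  # 7.5 -- down from the $10 you paid
print(stake_after / stake_before)       # 0.5 -- half the ownership stake
```

So the below-value issue costs you both ways: your ownership fraction is cut in half, and each share is worth less than it was.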
I thought, back in September, that Bernanke, Paulson, and company had a real plan, backed up with solid reasoning, and some idea of what effects their proposed policies would have on the financial system and the economy. I didn’t agree with their ideas, but I thought I could at least respect their reasoning. But three months later, I’m not so sure. I think that they, just like Congress and the Bush Administration in general, are trying anything in an attempt to get a short-term boost, regardless of the long-term consequences.
The best bet for everybody would be for governments to step back and take an honest critical look at the situation, study possible responses and their likely outcomes, announce a plan, and then stick with it. The current whac-a-mole approach is worse than doing nothing (which, come to think of it, still doesn’t sound like a bad idea). Changing approaches every few weeks does nothing but add uncertainty, causing people to overreact and creating chaos when the thing we need most right now is stability.
Overall, I like working with C# and the .NET Framework. But sometimes I just can’t imagine what the designers were thinking when they put some things together. High on the list of things I don’t understand is the lack of a ForEach extension method for the generic IEnumerable interface.
IEnumerable extension methods were added in .NET 3.5 to support LINQ. These methods enable a more functional style of programming that you just couldn’t do with earlier versions of C#. For example, in earlier versions of C# (before 3.0), if you wanted to obtain the sum of items in an array of ints, you would write:
int[] a = new int[10];   // size chosen for illustration
// some code here to initialize the array
// now compute the sum
int sum = 0;
foreach (int x in a)
    sum += x;
In .NET 3.5, arrays implement the generic IEnumerable<T> interface, and the LINQ extension methods give that interface a Sum method to which you pass a lambda expression. Computing the sum of items becomes a one-liner:
int sum = a.Sum(x => x);
That syntax looks pretty strange, but it takes very little getting used to and you begin to see how very powerful this functional style of programming is.
IEnumerable extension methods let you do lots of different things as one-liners. For example, if you have a list of employees and you want to select just those who are female, you can write a one-liner:
var FemaleEmployees = Employees.Where(e => e.Gender == 'F');
There are extension methods for many specific things: determining if all or any items in the collection meet a condition; applying an accumulator function over the list; computing average, minimum, maximum, sum; skipping items; taking a specific number of items, etc. But there is no extension method that will allow you to apply a function to all items in the list. That is, there is no IEnumerable.ForEach method.
I do a lot of list processing and my code is littered with calls to the IEnumerable extension methods. It’s very functional-looking code. But when I want to do something that’s not defined by one of the extension methods, I have to drop back into procedural mode. For example, I want to apply a function to all the items in the list:
foreach (var item in MyEnumerable)
    DoSomething(item);
That’s crazy, when I should be able to write:
MyEnumerable.ForEach(item => DoSomething(item));
The lack of ForEach is terribly annoying. What’s worse is that ForEach does exist for arrays and for the generic List class. It’s kind of strange, though. With arrays you have to call the static Array.ForEach method:
Array.ForEach(MyArray, item => DoSomething(item));
For generic List collections, it’s an instance method:
MyList.ForEach(item => DoSomething(item));
And other collection types simply don’t have a ForEach method.
It’s simple enough to add my own ForEach extension method:
public static class Extensions
{
    public static void ForEach<T>(this IEnumerable<T> source, Action<T> action)
    {
        foreach (var item in source)
            action(item);
    }
}
That’s what I did, and it works just fine. But it seems such an obvious addition (i.e. everybody would want it) that I can’t for the life of me understand why it wasn’t included in the first place.
I know the guys at Microsoft who designed this stuff aren’t stupid, so I just have to think that they had a good reason (at least in their mind) for not including a ForEach extension method for the generic IEnumerable interface. But try as I might, I can’t imagine what that reason is. If you have any idea, I’d sure like to hear it.