How effective is interval training?

I ran across an interesting Associated Press article aggregated on Yahoo today: Intense Interval Training Deemed Effective.  Researchers measured the strength and endurance of two groups of college students, all of whom were healthy and exercised regularly, but weren’t really athletes.  One group did three 30-minute interval sessions per week for two weeks.  The other group did no specific training, but continued their normal activities, including basketball, jogging, or aerobics.

It’s little surprise to learn that endurance and strength improved in the group that did the interval training.  They nearly doubled their endurance, although how endurance was measured was not specified.  That the control group showed no change also doesn’t come as much of a surprise.

The surprising thing about this research is how shallow it is.  Athletes have known for years that focused interval training is a valuable part of an overall training schedule.  If you ever played football or ran track in high school, you probably remember your coach making you run sprints until you thought you were going to puke or pass out.  Working the body at high heart rates (above your lactate threshold) for brief periods builds strength and increases your ability to use oxygen.  If the measure of endurance is how long you can operate at a high heart rate, then of course interval training will increase your endurance.

The article does state that interval training is not effective as a normal exercise regimen because it’s difficult, painful, and doesn’t burn enough calories.  Most people do not have the motivation to complete even one interval training session without a lot of encouragement from somebody else who’s right there:  a coach or a workout partner.  A set of four to seven 30-second all-out efforts is hard.  Your lungs burn.  Your legs ache.  Your head spins.  After two or three intervals, you can’t believe how long 30 seconds is.  Halfway through the workout you start thinking, “Why am I doing this?  Why should I put myself through such pain?”  Without somebody pushing, only the most highly motivated individuals will push through the pain and complete the interval workout at 100% effort.  I know.  I’ve done it.  Interval workouts are effective, but they’re unbelievably painful.  A 30-minute interval workout in the morning will leave you tired for the rest of the day.

There are other disadvantages to interval training.  The most common problem that athletes experience is overtraining.  If one interval workout per week is good, wouldn’t two or three be better?  All too many athletes, even advanced athletes, overdo the interval training by not giving their bodies time to rest and rebuild.  They show improvement for the first few weeks, maybe a month, and then their performance levels off or falls.  The most common reaction then is to increase the intensity, until finally they injure themselves:  pull a muscle, develop a stress fracture, or just run their bodies down to the point that they have no energy.

Endurance cyclists (those who compete in 24-hour races, Paris-Brest-Paris, the Race Across America, and similar events) have tried replacing much of their aerobic conditioning with intervals, and have mostly failed.  Interval training is important for building strength and endurance at high heart rates, but a 30-minute interval workout cannot help with conditioning your back, neck, shoulders, forearms, wrists, butt, and feet for the rigors of spending all day in the saddle.  Believe me, I’ve tried it.  Although my legs and cardiovascular system were just fine after 100 miles, the rest of my body was wiped out.  Interval training is a valuable addition to a training program, but certainly not a substitute for hours on the bike and the aerobic conditioning you need for all-around health.

How to securely destroy a CD or DVD?

Related to the previous note, how the heck do I securely destroy a CD or DVD?  I know I can make it unreadable to a standard drive by shattering it or popping it in the microwave for about two seconds.  But that doesn’t really affect the bits that are written to the media.  Could somebody melt the plastic cover off the CD and read the raw media using techniques similar to those used to reconstruct information from damaged hard drives?  Seems to me the best way would be to shred the media.

Several companies make devices that purport to make your data unreadable.  Aleratec makes a DVD/CD shredder that doesn’t actually shred the media in the normal sense.  It simply impresses a waffle pattern on the plastic covering, making the CD unreadable by any normal means.  That would stop the average dumpster diver from stealing your information, but it wouldn’t deter somebody who is more motivated.

Many companies make devices that actually shred the media in much the same way that a paper shredder shreds paper.  In fact, some paper shredders will shred CDs, DVDs, and diskettes.  Provided the resulting pieces are small enough, it would be very difficult (but not impossible) for even the most motivated black hat to obtain any useful data.  This option gives the most bang for the buck.

As with hard drives, the only sure way to destroy the data would be to melt the media into so much slag.  I suspect a machine to do that would be way too expensive unless you really don’t want the government to have any chance at reconstructing your data.  But if they were willing to go to that extreme, they’d probably be better off getting a warrant and seizing your computer.

I think I’ll invest in a new paper shredder.

I’m looking for a program . . .

I’m not sure what to call the program I’m looking for.  I thought it would be called an archive manager, but a Google search on that term returns backup programs, file managers, and other file manipulation or viewing tools, but nothing like what I’m looking for.  Let me explain.

I have CD backups of my systems going back almost 10 years.  Some of the data on them is over 20 years old.  We’ll leave the discussion of how useful most of that stuff is for another time.  The thing is, I have a dozen or more CDs with archives of old email messages, old source code, and all manner of stuff.  I’m afraid to throw any of the old CDs out because I don’t know if they contain stuff that isn’t on the newer backups.  It’s clear, though, that I can’t continue to add to my backup CD collection indefinitely.

What I want to do is consolidate the backups into a single directory structure, remove duplicate information, and make sure that the most recent version of any duplicated file is the one that’s kept.  I know I can’t fully automate the process, as I’ll want the ability to manually resolve any changes, but a program should be able to do most of the grunt work for me.  Here’s what I envision.

  1. Copy the entire contents of the oldest backup CD to a directory on the hard drive.
  2. Insert the next most recent backup and start a program that will compare the new CD with the existing file structure.
  3. All files that exist on the new CD but not on the original are copied without question.
  4. Files that have duplicate names, modification dates, sizes, and contents (using a CRC or MD5 hash) are ignored (not copied).
  5. Files that have duplicate names but different dates or contents are copied to the destination and assigned a version number, and are flagged in the user interface for further action.

After each CD is processed, I manually resolve the changed files by deleting one version, or by renaming so that the most recent version is the one that’s kept.
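Here’s a rough sketch, in Python, of the comparison step I have in mind.  The paths and the version-naming scheme are made up; the point is just the copy/skip/flag logic from the list above.

    import hashlib
    import shutil
    from pathlib import Path

    def md5sum(path, chunk_size=1 << 20):
        """Hash a file in chunks so big archives don't blow out memory."""
        digest = hashlib.md5()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def merge_cd(cd_root, dest_root):
        """Merge one backup CD into the working directory.

        New files are copied, identical files are skipped, and name
        collisions with different contents are copied under a versioned
        name and returned for manual review (steps 3, 4, and 5 above).
        """
        cd_root, dest_root = Path(cd_root), Path(dest_root)
        conflicts = []
        for src in cd_root.rglob("*"):
            if not src.is_file():
                continue
            rel = src.relative_to(cd_root)
            dest = dest_root / rel
            if not dest.exists():
                dest.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dest)          # new file: copy without question
            elif md5sum(src) == md5sum(dest):
                continue                         # same name, same contents: ignore
            else:
                version = 1                      # same name, different contents: flag it
                while True:
                    candidate = dest.with_name(f"{dest.stem}.v{version}{dest.suffix}")
                    if not candidate.exists():
                        break
                    version += 1
                shutil.copy2(src, candidate)
                conflicts.append((rel, candidate))
        return conflicts

    # Example: merge_cd("/media/cdrom", "/home/me/backup-merge") and then walk
    # the returned conflict list to resolve each file by hand.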

If I apply that methodology to every one of my backup CDs, by the time I’m done I should have a single top-level directory that contains all of the information from the 10 years of backup CDs.  I can then go through that directory and delete anything I no longer want to save (I really don’t need those emails from 1985), burn a single CD, and delete the working directory from my hard drive.  The next time I want to do a backup, I copy the backup CD to a working directory, use the program to compare the data on my hard drive with the backup image, copy the necessary files, and burn the result to CD.

I have to think there’s a similar program available.  But I don’t know what it is or even what to call it.  Any ideas?

Does system configuration really have to be this difficult?

If I were to formalize my approach to debugging, the first rule would be “identify the cause of a problem before you attempt to fix it.”  Poking around making changes that might possibly fix the problem without first stepping through the code and seeing where things go wrong exposes you to several risks:  you can make the problem worse, you can create other problems, or you can mask the original problem temporarily only to find it reappear at some later time.  Unless you know what’s causing a problem, you cannot reliably say that you fixed it.  The worst thing that can happen is to have a bug “just disappear.”

That rule holds for debugging other things, too.  You wouldn’t start randomly replacing parts on your car to eliminate a squeak, or go out and buy a new battery without first testing and re-charging the existing one.  The process of debugging is methodical:  locate, as closely as possible, the cause of the problem, and then change one thing at a time until the problem is solved.  Any other approach is a waste of time and an invitation to cause more problems.

As computers and operating systems have become more complicated, it’s become increasingly difficult to locate the causes of problems.  For any given set of symptoms, there are untold possible culprits.  Is it a hardware or software problem?  Is the problem with the driver, the application settings, or a conflict with another program or device in the system?  I’ve seen Microsoft Word crash because of a bad video driver.  Why the video driver should cause an application program to crash is beyond me, but it is (or at least it used to be) fairly common.

Of all the subsystems I’ve had trouble with over the years, sound has been the worst.  In the 13 years or so since I’ve had sound on the computer, the only sound card I never had trouble with was the original SoundBlaster that I bought in 1992 or 1993.  Low audio, excessive hiss, inoperative microphone, IRQ and driver conflicts, application conflicts or poor application software, you name it and I’ve probably experienced the problem.  I have no idea why the sound subsystem has to be so difficult.

Today I was trying to use the integrated SigmaTel sound hardware on my laptop to record some speech, and I couldn’t get any audio.  My first suspicion was that the microphone was bad, so I tried a different one.  No dice.  I booted my old system and plugged my headset into that.  The microphone worked like a champ.  So the problem is with the laptop.  I poked around in the settings, but didn’t find anything promising.  A quick search of Dell’s support site revealed that there is a new driver (released on June 20) that addresses some problems.  Downloading and installing the new driver didn’t fix the problem.  Then I ran the diagnostic utility.  It showed that the hardware was working fine, and when I ran Sound Recorder again it actually recorded my voice.

Why did running a diagnostic change the behavior of the sound subsystem?  Will I have to run the diagnostic so that it can perform some magic every time I want to use the microphone?

When people learn that I program computers for a living, they often ask me if I can help them with a problem.  My standard response these days is:  “I just program these machines.  I have no idea how to actually use them.”  They laugh, but there is quite a bit of truth in that statement.  I like for things to make sense, and as time goes on there is less and less sense to be made out of configuring my computer.  Maybe I’m just getting old.

Supreme Court further erodes private property rights

The U.S. Supreme Court on Thursday released its opinion in the Kelo v. New London (PDF) case.  When I saw a news headline saying that the decision had been handed down I decided to read the entire decision myself before reading any of the commentary.

In 2000, the city of New London, CT approved a development plan intended to revitalize the city.  The 90 acres to be redeveloped were adjacent to a large manufacturing facility proposed by the pharmaceutical company Pfizer.  As part of the development plan the city obtained property from willing sellers and attempted to use its power of eminent domain to force the remainder of the property owners to relinquish their property for “just compensation” as required by the Fifth Amendment of the Constitution.  Several of the landowners protested, arguing that the takings violated the “public use” clause of the Fifth Amendment.

The city argued that economic development is a valid reason for exercising eminent domain, even though the property in question would eventually be sold or leased to private parties.  The case worked its way through the Connecticut courts and finally to the State Supreme Court, which held that the takings were valid.  The U.S. Supreme Court agreed to hear the case.  Thursday the Court ruled in favor of the city, upholding in a 5-4 decision the ruling of the Connecticut Supreme Court.

I was dismayed when I first heard about the decision, thinking that this was a huge departure from the idea of private property.  But in reading the Court’s decision I learned that this is just one more step down that road.  Two prior cases in particular have established government’s ability to use its power of eminent domain to transfer property from one private party to another.

In Berman v. Parker (1954), the Court upheld a redevelopment plan targeting a blighted area of Washington, D.C. in which most of the housing was beyond repair.  The city invoked eminent domain to condemn the property, use some of it for public purposes, and sell or lease the remainder to private parties.  To complete its plan the city also took some non-blighted property and the owner protested.  In a unanimous ruling the Court ruled in favor of the city, stating:

We do not sit to determine whether a particular housing project is or is not desirable.  The concept of the public welfare is broad and inclusive… The values it represents are spiritual as well as physical, aesthetic as well as monetary.  It is within the power of the legislature to determine that the community should be beautiful as well as healthy, spacious as well as clean, well-balanced as well as carefully patrolled.  In the present case, the Congress and its authorized agencies have made determinations that take into account a wide variety of values.  It is not for us to reappraise them.  If those who govern the District of Columbia decide that the Nation’s Capital should be beautiful as well as sanitary, there is nothing in the Fifth Amendment that stands in the way.

In Hawaii Housing Authority v. Midkiff, the Court considered a Hawaii statute whereby title was taken from land owners (lessors) and transferred to other private parties (lessees) in order to reduce the concentration of land ownership.  The Ninth Circuit, which ruled against the Hawaii Housing Authority, described the taking as “a naked attempt on the part of the state of Hawaii to take the property of A and transfer it to B solely for B’s private use and benefit.”  The Supreme Court, in another unanimous decision, reversed the Ninth Circuit’s ruling, holding that the State’s purpose of eliminating the “social and economic evils of a land oligopoly” qualified as a valid public use.

In light of the Court’s ruling on those two cases, it is no surprise that the Court ruled in favor of the city of New London in the current case.  The Court rarely reverses itself, and the Kelo v. New London case is similar enough to those two and to others that the outcome isn’t a huge departure from other rulings.  It does place in question, though, the value of the takings clause in the Fifth Amendment.

I’m inclined to favor Justice Thomas’s dissenting opinion: that the Court should strictly interpret the Fifth Amendment, that this case should have been decided in favor of the petitioners, and that the Berman and Midkiff decisions should no longer be used blindly as precedent when deciding similar cases in the future.

But nobody asked me.

Field Day 2005

Today and tomorrow mark ARRL Field Day 2005, the annual ham radio event where we all head out to remote locations, set up temporary stations, and see how many stations we can contact.  Okay, so it’s a 2-day geek fest for radio nerds.  We actually do benefit from it.  The idea is to simulate operating for an extended period in an emergency, using power other than from the commercial power lines.  Mostly that means gas-powered generators.

There are many different categories of stations and many different ways to earn extra points beyond the points you get for making contacts.  Points are awarded for having an elected official visit the site, getting a press release published in the paper, making a contact via one of the amateur radio satellites, operating special modes, and using natural power.  For the second year in a row our club’s natural power source was me riding a bicycle.

I wrote up last year’s experiment here.  That setup used a small Plymouth alternator to convert my pedaling into electrical power that was stored in a battery.  We charged the battery and used it to make our required five contacts to earn the 100 points for a natural power source.  There were two problems with that setup.  First, a lot of my pedaling effort was wasted exciting the field for the alternator.  Second, it just wasn’t very sexy.  Pointing to a battery and saying “I charged that by pedaling” doesn’t have quite the same effect as driving the radio directly from the bicycle generator.

This year my friend Steve Cowell (KI5YG) got hold of a 24 volt electric scooter motor and wired up a voltage regulator.  If you turn an electric motor, it acts like a generator.  We attached the motor to the bicycle using the same V-belt we used last year, put a 22,000 μF capacitor in-line to insulate me from the transmitter load, and attached the radio to the voltage regulator.  32 minutes of medium-hard pedaling later and we had our five contacts.  That was a whole lot easier than the hours and hours of pedaling I had to do last year in order to charge the battery.

Wiping a drive with DBAN

Tuesday I wrote a bit about securely erasing data from a hard drive and I mentioned Darik’s Boot and Nuke.  DBAN is a nifty little system that will write a bootable diskette or CD that you can use to completely (as much as reasonably possible) eliminate all traces of your data from a hard drive.  The image that it writes contains a pared-down bootable Linux system and the program that actually erases the data.  It’s all quite easy to use.

I used the free Eraser program to create the DBAN bootable disk and then popped that into my Devil Machine (a Celeron 666 lab machine) to test it out.  DBAN supports a number of different secure erasure techniques, ranging from very low to very high security.  The default is the DOD 5220.22-M method, which the program ranks as medium security.  At that level your data probably won’t hide from the FBI or the NSA, but your average identity thief or local law enforcement crime lab wouldn’t be able to do anything with it.  I tried to use the more secure Gutmann technique, but the program failed because it was unable to allocate enough memory.  I don’t know why, exactly, but I didn’t feel like futzing with it.  Besides, I’ve never used this lab machine for anything critical, so the chances of it containing anything personal or incriminating are vanishingly small.

It took DBAN right at two hours to make the seven passes required to securely erase my 100 GB drive.  At that rate, I guess it would take 10 or more hours to complete the 35 passes of the Gutmann method?  I’ll try it on one of the other systems.

My only question now is:  how do I prove that the thing actually worked?  I’m no dummy, but I have absolutely no way to verify that the program did what it claims to do.  I could download the source code and build my own version of the program to ensure that what I ran is indeed what the author wrote.  I’m even capable of understanding what the code does, so I could prove that it actually implements the secure erasure methods it claims to.

I could inspect data on the individual disk sectors, but all that will tell me is that the drive electronics can’t discern any meaningful data.  I don’t have the equipment or the knowledge to inspect the drive any other way.
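The closest I can get to a spot check is scanning the raw device for long runs of readable text.  Here’s a rough Python sketch; the /dev/sdb device name is just an example, and as noted above it only tells me what the drive electronics report.

    import string
    import sys

    PRINTABLE = set(string.printable.encode())

    def scan_for_text(device, min_run=16, max_bytes=1 << 30):
        """Crude spot check: read a raw block device and report long runs of
        printable ASCII, which would suggest recognizable data survived the
        wipe.  It proves nothing about what specialized equipment could do."""
        run = bytearray()
        hits = 0
        bytes_read = 0
        with open(device, "rb", buffering=1 << 20) as f:
            while bytes_read < max_bytes:
                block = f.read(1 << 20)
                if not block:
                    break
                bytes_read += len(block)
                for b in block:
                    if b in PRINTABLE:
                        run.append(b)
                    else:
                        if len(run) >= min_run:
                            hits += 1
                            print(run.decode("ascii", "replace")[:80])
                        run.clear()
        return hits

    if __name__ == "__main__":
        # e.g. run as root:  python scan.py /dev/sdb   (use your own device name)
        print(scan_for_text(sys.argv[1]), "suspicious text runs found")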

I’m satisfied that what I downloaded works, and I’m not going to fret about it.  But this illustrates a fundamental truth about security.  At some point you have to trust somebody.  I’m smarter than your average computer user (about computers, anyway), able to read and understand source code and inspect disk sectors to see if any of my data remains in a normally readable form.  But even I have to take somebody’s word for it that the method used by DBAN actually makes it difficult or impossible to reconstruct meaningful data from my hard drive.

In any case, I strongly recommend Eraser and DBAN.  Do yourself a favor and use them before you let go of a hard drive.

Entropy in source code control

One definition of entropy is, “Inevitable and steady deterioration of a system or society.”  We’ve all seen it: absent constant maintenance, systems move from a state of order to a state of disorder.  Weeds grow in the flower garden and your desk becomes cluttered and disorganized.  The same thing happens to a software system’s code.  At the start of a project everything is clean.  The directory structure is nicely laid out and the source code repository exactly mirrors the code on the disk.  Formal, frequent (ideally daily) build procedures ensure that the source control system remains in sync with what the developers are doing.

But at some point the project goes into “crunch” mode.  Either the team gets behind schedule or has to rush to get a bug fix out the door for a critical client or magazine review.  Maybe the product ships and a year later a lone developer hacks in some changes quickly and doesn’t follow all the formal procedures to maintain the fidelity of the source code control system.  At some point, the project gets out of sync.  Six years later another lone developer picks up the code and tries to puzzle it all out.

Source code control systems like Microsoft’s Visual Source Safe and the open source CVS (Concurrent Versions System) serve three primary purposes:

  1. They serve as a central repository for the project’s source code.
  2. They maintain a revision history so that it’s possible to retrieve all versions of the code and view every change made from the project’s inception up to the most recent version.
  3. They control access to the source code, ensuring that only authorized users can read or update the code base, and that changes are recorded in the proper order (i.e. that older versions don’t overwrite newer versions).

However, no version control system that I’ve used can prevent users from subverting it.  It’s possible to check in files that aren’t used in the project, and to use files without checking them in to the database.  Everything works fine until somebody decides to grab the latest version from the database and try to build the project.  There’s no way for the system to enforce the rules, and no way short of trying to build the project to prove that the rules have been followed.  Maintaining a project’s source integrity requires active thought by the team members all the time.

Microsoft Visual Studio, Visual Studio .NET, and some other development environments have varying degrees of integration with version control systems.  These integrated systems work well as long as everybody follows the rules.  The problem is that the rules aren’t precisely defined, they’re hard to follow, and they’re absurdly easy to break unintentionally.

The only way to ensure that your project will build successfully at any time is to create and maintain a daily build procedure that gets the latest version of the code from the repository into a clean directory structure and builds the entire project.  Every programmer on the project is notified of the build status every day.  This technique has been proven many times over the years, and is recommended by any project management book or seminar produced in the last five years.  Martin Fowler calls it Continuous Integration.  For a more friendly discussion of the topic and links to other resources, see Joel Spolsky‘s Daily Builds are Your Friend.
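For what it’s worth, the machinery doesn’t have to be fancy.  Here’s a rough sketch in Python, assuming a CVS repository, a make-based build, and made-up server and list names; the idea is just to check out a clean tree, build it, and mail the team the result every night.

    #!/usr/bin/env python
    """Minimal daily-build sketch: check out a fresh tree, build it, and
    mail the result to the team.  The CVSROOT, module name, and addresses
    are placeholders; adjust everything for your own project."""

    import shutil
    import smtplib
    import subprocess
    import tempfile
    from email.message import EmailMessage

    CVSROOT = ":pserver:build@cvs.example.com:/cvsroot"    # placeholder
    MODULE = "product"                                     # placeholder
    TEAM = ["dev-team@example.com"]                        # placeholder

    def run(cmd, cwd=None):
        return subprocess.run(cmd, cwd=cwd, capture_output=True, text=True)

    def daily_build():
        workdir = tempfile.mkdtemp(prefix="dailybuild-")
        try:
            # Always start from a clean directory so stray files can't hide a broken tree.
            checkout = run(["cvs", "-d", CVSROOT, "checkout", MODULE], cwd=workdir)
            if checkout.returncode != 0:
                return False, checkout.stderr
            build = run(["make", "all"], cwd=f"{workdir}/{MODULE}")
            return build.returncode == 0, build.stdout + build.stderr
        finally:
            shutil.rmtree(workdir, ignore_errors=True)

    def notify(ok, log):
        msg = EmailMessage()
        msg["Subject"] = "Daily build: " + ("PASSED" if ok else "FAILED")
        msg["From"] = "buildbot@example.com"                # placeholder
        msg["To"] = ", ".join(TEAM)
        msg.set_content(log[-10000:])    # last chunk of the build log
        with smtplib.SMTP("localhost") as smtp:
            smtp.send_message(msg)

    if __name__ == "__main__":
        ok, log = daily_build()
        notify(ok, log)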

I’d be willing to bet that almost all successful large software systems use a similar technique.  I’d also be willing to bet that most unsuccessful large systems can point to the lack of a daily build process as a major contributor to the project’s failure.

Here’s the kicker, though.  A daily build process will ensure that you can build your project, and automated testing can ensure that the built version actually works.  But there does not appear to be a way to ensure that all the rules are followed and that the project file remains in sync with version control.  It’s possible to add files to version control without adding them to the project file, and as long as your daily build procedure pulls down the entire source tree, you’ll never know it.  The only way you can ensure that the project and the source code control stay in sync is to manually open the project from version control into a clean directory.  And that isn’t going to happen every day.  Or even every month.  Instant entropy.
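The closest thing to an automated check that I can imagine is a crude script that compares the files on disk with the files the project actually references.  Here’s a rough Python sketch; it assumes the project file stores source file names as plain text (the XML-based formats do), and the extensions and paths are just examples.

    import os
    import sys

    def find_orphans(project_file, source_root, exts=(".cs", ".cpp", ".h", ".c")):
        """Rough consistency check between a checked-out tree and a project file.
        Reports source files on disk that the project never mentions, which is
        the kind of drift that a daily build alone won't catch."""
        with open(project_file, "r", errors="replace") as f:
            project_text = f.read()
        orphans = []
        for dirpath, _, filenames in os.walk(source_root):
            for name in filenames:
                if name.endswith(exts) and name not in project_text:
                    orphans.append(os.path.join(dirpath, name))
        return orphans

    if __name__ == "__main__":
        # e.g. python check_sync.py product.vcproj ./product   (names made up)
        for path in find_orphans(sys.argv[1], sys.argv[2]):
            print("not referenced by the project:", path)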

Daily builds will keep your project on schedule.  Build early and build often.  But no automated tool is going to prevent entropy in your project’s structure.  That’s just the way it is.  It’s a dirty little secret that most programmers either don’t recognize or prefer not to discuss.

How to securely erase a hard drive

After two months with the laptops, it’s certain that our old desktop machines won’t be in daily use anymore.  I don’t know yet who’s going to get them, but three of the four old machines will be leaving the house soon.  They’re not much by today’s standards, but a 750 MHz Pentium III with 768 MB of RAM and an 80 GB hard drive would make a decent home file server or browser, email, and word processing machine.

Before I give the machines away, I want to make sure that all personal data is wiped from the drives.  That turns out to be a lot more difficult than you might think.

As you probably know, when you delete a file in Windows the data isn’t actually erased.  What Windows does is “move” it to the Recycle Bin by just changing the location of an index entry.  All the data remains on the disk.  Even if you tell Windows to delete the file rather than move it to the recycle bin, the data isn’t erased.  Only the index entry is deleted.  Windows will re-use the space taken by the file at some point, but there’s no guarantee when.  Somebody with just a little knowledge of disk formats can easily pull the data from the disk.

One solution to this problem is to overwrite the file with random data before deleting it.  In theory that will deter the casual snoop who knows a little bit about reading individual disk sectors from reading your files.  But it only works for files that you explicitly delete.  It won’t prevent the snoop from gathering data from backup files created by Word or other programs, or from reading the pieces of the Windows swap file that are scattered over your disk.  The swap file is especially insidious because it can contain information that you never actually saved to disk.  If Windows gets busy and needs to free up some RAM, it will write stuff from memory to disk.  That nasty letter you wrote to your boss but didn’t actually save might very well be floating around on your hard drive.
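Overwriting before deleting is simple enough to do yourself.  Here’s a rough Python sketch; the file name is made up, and as the paragraph above explains, journaling file systems, backup copies, and the swap file can still leave the original bytes lying around.

    import os

    def overwrite_and_delete(path, passes=3):
        """Overwrite a file with random bytes before unlinking it.  This is
        only a deterrent against casual recovery; copies elsewhere on the
        disk are untouched."""
        size = os.path.getsize(path)
        with open(path, "r+b") as f:
            for _ in range(passes):
                f.seek(0)
                remaining = size
                while remaining > 0:
                    chunk = min(remaining, 1 << 20)
                    f.write(os.urandom(chunk))
                    remaining -= chunk
                f.flush()
                os.fsync(f.fileno())   # push this pass to disk before starting the next
        os.remove(path)

    # overwrite_and_delete("nasty-letter-to-boss.doc")   # file name made up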

Deleting individual files is not secure enough.  To ensure that people can’t get data from you, you have to wipe the entire drive.  Some people say that formatting the drive is good enough.  But Windows maintains certain areas of the disk when it formats.  And even the areas that it overwrites aren’t as secure as you would think.

I don’t completely understand the physics of why, but when you write data to a location on disk, the previous data isn’t fully destroyed.  It’s almost child’s play, using equipment and software that’s fairly commonplace these days, to reconstruct the original data.  In fact, it’s possible to reconstruct (with decreasing levels of accuracy) several generations of data in a particular location.  For example, if you wrote an “E”, then over-wrote that with “B”, and then wrote over the same location again with an “X”, it’s quite likely that a skilled operator with good equipment would be able to reconstruct what you did.  Frightening, isn’t it?  Read Peter Gutmann’s Secure Deletion of Data from Magnetic and Solid-State Memory for a little better explanation.

The only positively sure way to securely remove data from the drive would be to destroy the drive.  Either grind the disk surface into dust, or melt it down.  Acid is more effective than burning, but it’s usually possible in either case to reconstruct some data.  But if you want to give away a computer with a working hard drive, how do you prevent people from getting at your old data?

The answer is found in a utility called Darik’s Boot and Nuke (DBAN), which makes several passes over your entire hard drive, writing specific patterns that are constructed to obscure the previous data.  The method used is described in Peter Gutmann’s article linked above, and also in the National Industrial Security Program Operating Manual of the US Department of Defense (aka DOD directive 5220.22-M).  The basic idea is to make so many different generations of overwritten data that it’s virtually impossible to reconstruct the last generation that had actual good data.  If downloading and using DBAN by itself seems daunting, download the free Eraser tool, which has an option to create the DBAN boot disk for you.

I haven’t actually used DBAN yet.  Give me a couple of days and I’ll let you know how it works.

Road Runner blocks mail server access

I responded to an email message the other day and got a failure notice in return.  It seems that Road Runner has blocked direct connections to its inbound mail servers from the residential dynamic space.  See Road Runner Mail Blocks for more information.

This isn’t a problem for most people, I know.  But I send mail through the SMTP server on my laptop rather than connect to Road Runner’s SMTP server.  The primary reason is so I can send mail when I’m traveling.  Unless I’m connected to Road Runner’s dynamic space, I don’t have access to their SMTP server.  I guess I could change the outbound server when I’m at home, but it’s always annoying to change it again whenever I go off network.

I’ve run into similar problems with other servers that treat my home-based server as “suspect.”  The solution there usually is to wait 30 minutes or so and try again.  Those servers are set up to give a non-permanent error to suspect servers on the first attempt, but to allow the message to go through on a retry.  The idea here is to discourage spambots that use a shotgun approach to email and don’t check return status.  I don’t know why Road Runner didn’t implement that technique instead of the iron curtain.
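That technique is usually called greylisting, and the core of it is only a few lines of logic.  Here’s a rough sketch in Python; a real implementation would add whitelists, entry expiry, and persistent storage.

    import time

    GREYLIST = {}        # (client IP, sender, recipient) -> time of first attempt
    RETRY_DELAY = 300    # seconds a legitimate mail server should wait before retrying

    def check_greylist(client_ip, sender, recipient, now=None):
        """Temporarily reject the first delivery attempt from an unknown
        triplet; accept once the sender retries after the delay, the way a
        real MTA does and a shotgun spambot usually doesn't."""
        now = now if now is not None else time.time()
        triplet = (client_ip, sender.lower(), recipient.lower())
        first_seen = GREYLIST.get(triplet)
        if first_seen is None:
            GREYLIST[triplet] = now
            return "451 4.7.1 Greylisted, please try again later"
        if now - first_seen < RETRY_DELAY:
            return "451 4.7.1 Greylisted, please try again later"
        return "250 OK"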

I could, of course, use my Web-based client when I’m on the road.  Except that I detest Web interfaces, and it’s difficult to get an archive of the sent messages back down to my laptop once I get home.

Would somebody please fix the email spam problem?  I’ve only been waiting for the last five years.