Back in the bad old days of computer programming, before the advent of integrated development environments (IDEs) and source-level debuggers, the most common debugging method was code tracing: placing output statements in the code that would write the state of the program or the name of the current procedure to the console or to a log file of some sort. Placed correctly, those messages could give you good insight into your program’s operation. The tough part was knowing where to put the trace statements so that you had enough information in the output log, but not too much. It was a painful process, but it was all we had at the time. Well, there were symbolic debuggers, but most COBOL programmers didn’t understand enough assembly language to use them effectively.
With the advent of source-level debuggers, the fine art of program tracing was mostly lost. Why dig through a log file to find out what your program is doing when you can just fire up the debugger and step through the code? Source debuggers are useful tools, but a log file gives you something that a debugger can’t: a permanent record. A log file can also be used (if you write your program to support it) to “replay” program execution. If you write your program so that it records all inputs in the log file, then it should be a simple matter to write code that reads input from the log file to duplicate a session. Combined with a source-level debugger, the log file gives you a reliable way to duplicate most errors.
The lost art of program tracing is finding its way back to mainstream programming. Operating systems have always had event log files where major events are recorded, but until recently little attention was paid to trace information. The primary reason for renewed interest in tracing? Web services and other programs that have no real user interface and are intended to run unattended for months or years at a time. A log file is the only practical way to monitor such applications and diagnose problems.
The point? I find it interesting how many times an older technology is supplanted by something “better,” only to be rediscovered a generation later and touted as the hottest new thing. All too often, replacing old technologies means losing something essential, and we don’t realize the loss until long after the old technology is mostly forgotten.