Stack Overflow is the best programmers’ resource to hit the Internet in quite some time. Online help forums for programmers are nothing new, but this one works better than anything else I’ve seen. I’m continually impressed by the variety and quality of the content there.
I’m also amazed by some of the questions. For example, this was posted today:
I work on my graduation thesis connected with image compression and I’m looking for algorithms, which use some mathematical methods (i.e. Discrete Cosine Transform) to achieve maximum compression ratio in minimum time and with minimal losses of quality.
Thank you in advance.
I find it difficult to believe that somebody who’s about to graduate from college doesn’t even know where to start researching his final project. As one commenter put it, “You really should be embarrassed that you’re asking for help googling for your graduation thesis.” His instructor should be embarrassed, too. In fact, the college should be embarrassed that they’re about to graduate a complete moron.
It really is amazing how many Stack Overflow questions can be answered by just typing the question into Google. For example, somebody asked today about using TBB for non-parallel tasks. The question had something to do with parallel processing, so out of curiosity I did a Google search for “TBB,” the first result of which was a link to Intel’s Threading Building Blocks library. Less than two minutes later, I had the answer. I guarantee that it took the person who asked longer to post the question than it took me to find the answer, and I didn’t even know what TBB was!
Another one. Somebody asked how to force Windows to reboot into safe mode. I’d never needed to do that, so I didn’t know how. But a quick Google search turned up this duplicate question, which contains the answer to the question.
I’m unable to find any data that says what percentage of questions on Stack Overflow are closed as duplicates. It looks to me to be in the single digits, meaning that it probably isn’t a huge problem. Plus, duplication isn’t necessarily bad. As Jeff Atwood points out in Dr. Strangedupe: Or, How I Learned to Stop Worrying And Love Duplication, some duplication is okay. Search isn’t perfect, and it’s quite possible to ask semantically identical questions that are syntactically very different. But in many cases, including the two that I pointed out above, a quick Google search revealed the answer much more quickly than I would have obtained it by posting a question and waiting for somebody knowledgeable in that topic to respond.
Paco is from a pattern created by a wood carver named Javo Sinta. (Alternate link, which might work better: Javo Sinta Woodcarving.) This carving is five inches tall, including the base.
My version is carved from mesquite.
I cut this out on the band saw a few months ago and started carving it with a knife, but then got sidetracked. I also got a little frustrated because I made a mistake in the way I cut it out. Today I finally got some time to work with the new power carver that Debra bought me for my birthday, and thought I’d finish up Paco.
I made quite a few mistakes on this piece, but I figure it’s not too bad for my first time with the power carver. I’ll definitely have to make another one when I get better with this new tool.
More pictures in the gallery.
I finally finished the ornaments for the annual carving swap. This year I elected to participate in the smaller group of 12, rather than in the larger group (20 to 25). Here’s the whole batch:
Clicking on the picture will give you a much better (larger) view.
I think they’re all cute, but this one’s my favorite:
The carvings are all between two and two and a half inches tall. The wood is mesquite from the back yard. I did all of the carving with a knife (no power tools). The hat, eyes, nose, and tongue are painted with acrylics, and the entire carving is given a coat of orange oil and beeswax.
I hope the others who are participating in the swap enjoy them. I’m a little worried that I inadvertently joined the Santa Swap. So far I’ve received seven ornaments, and all of them are Santas. I suppose this could be a Santa Pup.
I’m setting up a computer that I can use to do some work from home. I’m still running Windows Server 2008 at the office because I’m slow to embrace change on my main development machine. But this new machine is going home and I want to explore what’s new in Windows.
I figured I’d skip Windows 7 and jump right to Windows 8. So we downloaded the Windows 8 Developer Preview. Then I hit a snag. The ISO file is 4.8 gigabytes in size. It requires a dual-layer DVD. We don’t have a drive capable of writing a dual-layer DVD.
At the bottom of the download page there’s a section titled, “How to install the Windows 8 Developer Preview from an ISO image.” In it, they mention the possibility of installing from a USB memory stick.
A quick trip to Fry’s got me an 8 gigabyte USB memory stick for less than $10. That’s step 1. The next step is getting a bootable image onto the thing.
Although Windows 7 can mount an ISO device image as a drive, Windows Server 2008 (and Windows Vista) don’t have that capability. However, there are third party tools that can do it. I downloaded and installed Virtual Clone Drive, and then told it to mount the Windows 8 ISO as my drive H. Then I followed these instructions to create a bootable image on the USB memory stick.
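For anyone following along, the procedure boils down to the standard diskpart-and-bootsect recipe. The disk number and drive letters below are from my setup (H: is the mounted ISO, F: is the USB stick) and will differ on yours, so check `list disk` carefully before running `clean`:

```
C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> create partition primary
DISKPART> active
DISKPART> format fs=ntfs quick
DISKPART> assign
DISKPART> exit

C:\> xcopy H:\*.* F:\ /s /e /f
C:\> H:\boot\bootsect.exe /nt60 F:
```

The last step uses the bootsect tool from the ISO’s own boot folder to write a boot sector the BIOS can start from.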
Be forewarned: it takes a very long time to copy 4.8 gigabytes to the USB stick. I don’t know quite how long. I started it last night and it was still running when I left for home about 30 minutes later. But it was done when I got in this morning.
With a loaded USB memory stick, all I had to do was tell the other machine’s BIOS to boot from the USB device. I restarted the computer and, like magic, the Windows 8 installer started reading files from the USB port.
I’m looking forward to playing with this new version of Windows. More when I know more . . .
A lot of interviewers will have a prospective developer do a code writing exercise in which the candidate is given a series of increasingly difficult small problems to solve in code. Often, this exercise is done on a white board, although with projectors and desktop sharing widely available now, many companies are moving to having the candidate compose at the keyboard.
I participated in this back when I was interviewing programmers, because it was expected. I always found the practice uncomfortable (more so as the interviewer than as the interviewee), and referred to it as “the white board inquisition.” Over the years, I’ve come to question the effectiveness of this interview technique.
We would typically start with something simple. For example, we’d ask a candidate to write the equivalent of the C strlen function. This was kind of a warm-up to get him accustomed to writing code on the board, to explain the rules of the test, and to see if he had even the slightest idea about programming. The code itself is very simple:
    int strlen(char *s)
    {
        int count = 0;
        for (char *p = s; *p != '\0'; ++p)
            ++count;
        return count;
    }
I found it surprising how many programmers couldn’t just whip that out, although I’m not sure where they had difficulty. It was obvious to me, even 15 years ago, that many of the candidates were very uncomfortable writing code on the white board, and I can understand why. I code by typing. The output of my “coding brain” is wired to my typing ability. I don’t write code by hand, and I certainly don’t do it on a white board. I type code into my IDE. Furthermore, I’m highly dependent on my tools. Things like automatic indentation, Intellisense or other types of autocompletion, edit-time compilation to show errors, and other productivity-enhancing features are essential for me to code effectively and efficiently. Even something as simple as strlen is painful to write without an IDE, and excruciating on a white board.
After strlen, we’d give a few more difficult problems. As I recall, reversing a string, determining if a string is a palindrome (which is really just the string reversal with a little twist), binary search, and a simple sort were among the exercises we’d give.
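The palindrome “twist” is that instead of copying characters you compare from both ends and walk inward. A sketch in C (the function name and signature are mine, not part of any exercise we actually handed out):

```c
#include <stdbool.h>
#include <string.h>

/* Return true if s reads the same forward and backward. */
bool is_palindrome(const char *s)
{
    size_t i = 0;
    size_t j = strlen(s);
    while (i < j) {
        --j;                  /* j now indexes the last unchecked char */
        if (s[i] != s[j])
            return false;
        ++i;
    }
    return true;
}
```

Note that the empty string and single characters fall out of the loop condition naturally, which is exactly the kind of edge case the exercise was meant to probe.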
One of the more interesting things I found was that younger programmers would try to solve the binary search recursively, and the older programmers would supply an iterative solution. Almost every recursive attempt failed.
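For what it’s worth, the iterative shape we were hoping to see looks something like this in C (names and signature are mine). The half-open range and the overflow-safe midpoint are the two spots where the recursive attempts usually went wrong:

```c
#include <stddef.h>

/* Return the index of key in the sorted array a[0..n-1], or -1 if absent. */
int binary_search(const int *a, size_t n, int key)
{
    size_t lo = 0, hi = n;                /* search the half-open range [lo, hi) */
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;  /* avoids overflow of (lo + hi) */
        if (a[mid] == key)
            return (int)mid;
        else if (a[mid] < key)
            lo = mid + 1;
        else
            hi = mid;
    }
    return -1;
}
```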
Of all the candidates I interviewed, one aced the white board inquisition. That person knew his computer science and could regurgitate code on the white board at an astounding rate. I’m pretty sure he could have reproduced a working balanced binary tree implementation in a few minutes. He couldn’t write a working program from a design specification, though, as we found out after we hired him. He knew how to implement common algorithms and data structures, but he had no concept of how to put those pieces together in order to make a useful program.
We also hired a few people who couldn’t do much more than strlen, but who did very well in other aspects of the interview. With one exception, those people turned out to be much better programmers than the ones who did well on the inquisition. The reason, I think, is that although they couldn’t write a binary search on the white board, they knew how and when to use the library-supplied binary search function. They could interpret a design specification and come up with a working program. They knew the libraries and how to use them.
In my experience, anything beyond a few very simple exercises is a waste of time. The white board inquisition could tell me whether a candidate understood the fundamental mechanics of writing code, but beyond that it gave no indication of whether he could write a real, working program.
So absolutely do something like FizzBuzz on the white board to weed out the candidates who can’t write code at all. But if you want to get an idea of a programmer’s real abilities, come up with a more complete exercise (something that can be done in an hour or so), sit the programmer down at a computer with Visual Studio or whatever development environment you expect him to be working in, and ask him to solve the problem. That will tell you whether he can use the tools, whether he understands the libraries, and whether he can solve a real problem rather than provide a solution to some annoyingly tedious puzzle.
A recent question on Stack Overflow reads:
Working with legacy code, I’ve found a lot of statements (more than 500) like this:
bool isAEqualsB = (a == b) ? true : false;
Does it make any sense to rewrite it like this?
bool isAEqualsB = (a == b);
Or will it be optimized?
When you see a question like this, you have to wonder what’s going through this programmer’s mind. There are several things wrong with the thought process that led to this question.
First, this is legacy code, which usually means that it’s older code written by somebody who is no longer around and that nobody else understands. It’s orphaned code. Most likely it’s also working code. The new programmer assigned to the code is either tasked with making a modification, or he’s just trying to understand it.
Second, the programmer is worried about optimizing something that he has not shown to be a bottleneck. Even if neither the C# compiler nor the JIT compiler optimizes the code, it’s highly unlikely that making the proposed change will have a significant effect on the program’s performance. This is especially true if the variables being compared are not primitive types. If they’re strings or other reference types (or value types with an overloaded == operator), then the cost of the method call that does the comparison will overwhelm any possible performance gain from restructuring the code.
Making changes to working code, especially code that you don’t understand, is a risky business. Sure, this looks like a straightforward modification, although I doubt that the actual variable names are a and b. You might even be able to create an editor macro that will make the change globally in the project. But it still takes time to develop the macro, run it, make sure the code compiles, and test it to ensure that it still works. And the risk of making an error along the way, although small, is non-zero.
There is a small readability benefit to be had. The second form of the expression above is easier to read than the first. However, that benefit does not offset the time and risk associated with making the change. The time you put into making this change will never be repaid–not in readability and not in performance. It’s a waste of time.
It’s interesting to note that the C# compiler does not optimize the first expression. With a and b as local ints, the IL for the first expression looks like this:

    IL_0000:  ldloc.0
    IL_0001:  ldloc.1
    IL_0002:  beq.s      IL_0007
    IL_0004:  ldc.i4.0
    IL_0005:  br.s       IL_0008
    IL_0007:  ldc.i4.1
    IL_0008:  stloc.2
And for the second expression:

    IL_0000:  ldloc.0
    IL_0001:  ldloc.1
    IL_0002:  ceq
    IL_0004:  stloc.2
Looking at the second block of code, you might think that the evaluation doesn’t require any branches. While it’s true that there are no branching instructions in the generated IL, there is a branch by the time you get to the native code. It’s hidden in the ceq instruction.
The .NET Boolean type (the C# bool is just an alias) is an 8-bit value that stores 1 for true and 0 for false. This is different from C, in which any non-zero value is considered to be true. When the ceq instruction is compiled, it generates assembly language code that’s equivalent to the following (register choices illustrative):

    cmp  ecx, edx   ; compare the two operands
    mov  al, 0      ; assume the result is false
    jne  done       ; not equal: leave it false
    mov  al, 1      ; equal: result is true
done:
The JIT compiler probably generates a slightly more efficient version of the code than that, but there is a branch. There has to be.
Lessons to learn from this:
- Restrict your optimization efforts to things you know are performance bottlenecks.
- Modifying code is a time-consuming and risky business. Be sure that your modifications are worth the cost of making them.