It surprises me how often I run across programming language elitists who express opinions such as those found in “sizeof(char) is 1.” The prevailing opinion there is that if you write extraneous code, it’s obvious that you don’t know enough about the language.
The author’s primary rant is that the C standard defines sizeof(char) as 1. So writing (sizeof(char) * StringLength) is the same as writing (1 * StringLength), and that’s just silly. After all, everybody knows that 1 * x == x. Just write (StringLength).
While he’s right that writing sizeof(char) is unnecessary, his assertion that doing so shows a lack of understanding is complete and utter bullshit.
Computer programming is in large part an exercise in applying familiar patterns to solve unfamiliar problems. We benefit from using patterns, and the general pattern for using malloc is malloc(size_of_thing_to_allocate * number_of_things_to_allocate). Why change that pattern just because you know that the size of the thing you’re allocating is 1? It’s not like it matters in terms of performance. The compiler is going to output the same code, regardless of which way you wrote it.
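To make that concrete, here’s a minimal sketch of my own (the function and variable names are invented for the example, not taken from the article). The two malloc calls request exactly the same number of bytes, and the compiler emits identical code for either:

#include <stdlib.h>
#include <string.h>

char *duplicate(const char *src)
{
    size_t StringLength = strlen(src) + 1;             /* count the terminating NUL */

    char *copy = malloc(sizeof(char) * StringLength);  /* the familiar pattern        */
    /* char *copy = malloc(StringLength); */           /* the "terse" form: identical */

    if (copy != NULL)
        memcpy(copy, src, StringLength);
    return copy;
}

Either way, the pattern is the point: size times count, every time.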
At the end of the article, the author points out that there are two forms of sizeof. When the argument is a type, the syntax is sizeof(type). When the argument is an expression, you can write sizeof expression; that is, you can eliminate the parentheses. He goes on to say:
If you don’t need the parentheses then don’t use them, it shows that you know the syntax of sizeof and generates confidence that you know what you’re doing.
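For reference, the two forms look like this in practice (a small sketch of my own, not code from the article):

#include <stdio.h>

int main(void)
{
    int x = 0;

    printf("%zu\n", sizeof(int));   /* type argument: parentheses required */
    printf("%zu\n", sizeof x);      /* expression argument: parentheses optional */
    printf("%zu\n", sizeof(x));     /* ...and still perfectly legal with them */

    return 0;
}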
I suppose, then, that all extraneous parentheses should be removed from all constructs? After all, by removing the parentheses from this code:
if (((x == 22) && (y == 33)) || (z == 15))
You have:
if (x == 22 && y == 33 || z == 15)
Never mind that by removing the parentheses you’re losing contextual information that actually helps in understanding the code. Sure, the minimal version works just as well. But the first version groups the expressions and shows me explicitly what the code is doing. I don’t have to dredge the operator precedence chart out of my brain and apply it. I can look and know. The “extraneous” version takes less effort to decipher, and I’m more likely to understand it correctly. More importantly, I’m more likely to write it correctly the first time around.
The things that the language elitists complain about are legion. C programmers often write if (!count) in place of if (count == 0). Why? Because it’s less code to type. They’re adamant that writing == 0 is something that only the uneducated masses would do. As one person put it, “every time you type ‘== 0,’ God kills a puppy.” Idiots. Never mind that by writing if (!count), you’re implying that count is a Boolean value. If you want to check to see if something is zero, write code that checks it against zero!
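A trivial sketch of my own to show what the two spellings say to the reader:

#include <stdio.h>

int main(void)
{
    int count = 0;                  /* an integer, not a Boolean */

    if (count == 0)                 /* reads as: is the count zero? */
        puts("empty (== 0 form)");

    if (!count)                     /* reads as: is count false? -- treats an int like a Boolean */
        puts("empty (! form)");

    return 0;
}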
C#, of course, eliminated a lot of that madness. You can’t write if (!count) when count is an integer. That doesn’t stop the language elitists, though, from complaining if you write if (Char.IsDigit('0') == true). The problem? “== true” is unnecessary. Perhaps it is. But it’s also completely unambiguous.
By the way, it’s amazing how many of those who “know everything” about the C language end up writing return (1);, even though the parentheses are not required. The syntax is return expression. The parentheses are allowed, but not necessary.
Adopting patterns frees our minds to worry about applying the patterns to solving the problem rather than spending brain cycles thinking like a compiler so that we can write the tersest possible code. Asserting that somebody who writes verbose but readable and correct code is somehow less knowledgeable only shows your ignorance of what’s important. Why spend brain cycles on irrelevant stuff like that?
I freely admit that I don’t have the C# operator precedence table memorized. And you know what? I don’t have to memorize it or even consult it, because I know that if I fully parenthesize my expressions I’ll never be wrong. I even use { and } to enclose single-statement blocks. Terribly inefficient of me, don’t you think? And when I was writing C, I did indeed write sizeof(char) in my calls to malloc.
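In practice that habit looks something like this (my own illustrative snippet; the values are arbitrary):

#include <stdio.h>

int main(void)
{
    int x = 22, y = 33, z = 15;

    /* Fully parenthesized, so no operator precedence chart is needed. */
    if (((x == 22) && (y == 33)) || (z == 15)) {
        puts("matched");            /* braces even around a single statement */
    }

    return 0;
}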
The language elitists would have you think that doing those things makes one less of a programmer. I, on the other hand, think that it makes one a better programmer. Why? Because the programmer who adopts such patterns spends his time solving the problem at hand rather than trying to make his code show that he knows and understands every nook and cranny of the language specification. The kind of elitism espoused in the linked article and by others of the same ilk is fine for college students and hobbyist programmers. But those of us who are paid to solve problems don’t have time for that kind of bullshit.