The latest meme stock: DJT

During the meteoric rise of Bitcoin in 2017, I wrote the following:


(Originally published November 28, 2017)

Gold Bitcoin Beanie Baby Bulbs

So, yeah, I’m not the first person to point out the parallels between the recent Bitcoin frenzy and the Dutch tulip mania of the 1630s. Nor, I suspect, am I the first to mention that Bitcoin’s meteoric rise bears shocking resemblance to:

I wasn’t around for the first of those, but I saw the others happen. I even lost a large part of my meager savings in the 1980 gold frenzy. Every one of these events saw people betting their futures on a “sure thing” that “can’t lose.” They were putting their entire life savings into it, borrowing heavily to gamble on a speculative market that seemed like it would just keep going up. And in every case, the bubble burst, wiping out savings in a very short period.

Those bubbles burst because investors flocking to the “can’t lose” scheme drove the prices to levels that were unsustainable. Early investors get in, ride the rise for a while, and then sell to new investors who want the same kind of trip. It becomes a positive feedback loop, until the price becomes too high to attract new investors. One day, somebody wants to get off and discovers that nobody wants to pay what he’s asking for his position. He decides to sell at a loss, at which point everybody else starts to panic and starts unloading as fast as they can, hoping to get out with something.

I don’t know enough about Bitcoin and other crypto currencies to say what, if anything, they’re actually worth, or if the idea of crypto currency has any long-term merit. But the meteoric increase in Bitcoin prices over the last year, from $750 to $10,000, brings to mind those parallels, and a little bit more research reveals all the signs of a speculative bubble. The number of companies specializing in crypto currency trading has grown tremendously over the past year. There are “network marketing” schemes that pay you for “helping” others get in on the deal. New crypto currencies are popping up. People are borrowing money to invest. People are posting cheerleader messages (“Rah, rah Bitcoin!”) on social media. I’m seeing more hockey stick charts every day. “Look what it’s done in just the last three months!”

There may indeed be some lasting value to Bitcoin and other crypto currencies, just as there was lasting value in Beanie Babies. I saw some at a yard sale last week, going for about 50 cents each.


Proving once again that people looking for a quick buck never pay attention to past mistakes, we have invented “meme stocks,” the most memorable of which was GameStop. But I think the newest one, the social media company called Trump Media and Technology Group, will eclipse even that. This is the company that supporters of Donald Trump created after he lost the 2020 election and got kicked off of Facebook and Twitter for his actions. The new site, Truth Social, promising “no censorship,” was designed to prominently feature the insane ramblings of the Bumbling Buffoon, and users who contradicted his incoherent missives or said negative things about him were banned from the site.

The idea was always to create the site, create a shell company (a SPAC — special purpose acquisition company) to acquire it and take it public. The site of course had some trouble getting started and even today is mostly a joke. But they succeeded, after a lot of investigation by the SEC and others, in taking the company public. At a ridiculously inflated valuation and with a Trump-typical ticker symbol: DJT. I think the last public company the Buffoon in Chief formed was called TRMP. No surprise, it failed. But not before Trump bilked his investors out of their cash. I’m surprised he managed to escape that one without any criminal penalties.

Anyway, an “investment” in this new company is nothing more than a gamble. And not a very smart one at that. The company lost $49 million in the first nine months of 2023. Its total revenue–every dollar it took in–was $3.4 million. And yet the company is valued, based on the price of its stock and the number of shares outstanding, at something like $7 billion!

Yes, that’s right: the company’s market cap is 2,000 times its revenue, and the company is bleeding cash. Furthermore, the single largest shareholder is the Bumbling Buffoon himself, a person who has a history of taking “investors'” money, siphoning off enough to repay his own contribution, and running the company into the ground. At current prices, Trump stands to gain something more than $5 billion if the company lasts long enough and the stock price remains at its ridiculously inflated valuation. He can’t cash in immediately, though: there’s a holding period on his stock.

You couldn’t convince me to invest a single cent in any venture that’s associated in any way with Donald Trump, and even if the Prevaricator in Chief weren’t involved you couldn’t convince me to invest in a company that’s operating at a loss and is valued at 2,000 times its total revenue. In a rational world, the company’s stock would be trading where it belongs: at pennies per share. What a scam.

That’s not what negative feedback means

On January 30, NPR’s All Things Considered ran a segment called How The Trump Administration’s Tariffs On China Have Affected American Companies. In it, NPR’s Ari Shapiro was asking questions of Bloomberg reporter Andrew Mayeda. There was this exchange.

SHAPIRO: When you look at the scale of the impact, is this more along the lines of an annoyance and inconvenience, or is it a real economic impact, something that could lead to slower economic growth, maybe even a recession down the line? How severe is it?

MAYEDA: If you actually look at the big-picture forecasts of the impact – for example, the IMF says that if we have a worst-case trade scenario, the global economy is going to be less than 1 percent smaller than what it otherwise would’ve been. That is not catastrophic. I think what people are concerned about is that there’s some type of confidence shock. That is to say that, you know, businesses start investing less. Consumers start spending less. And it gets into this negative feedback loop where reducing confidence leads to slower growth.

Hate to tell you this, Mr. Mayeda, but what you describe here is a positive feedback loop.

“Negative feedback occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output.” Furthermore:

Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing can be very stable, accurate, and responsive.

Sadly, this isn’t the only instance I’ve encountered recently of reporters misusing the term. In all cases, reporters are characterizing the feedback as negative or positive based on the outcome. To them, if the outcome is bad, then it’s a negative feedback loop. If the outcome is good, then it’s a positive feedback loop. That’s not the way it works.

Negative feedback loops typically lead to stable systems: generally a positive result. A positive feedback loop tends to result in a system fluctuating wildly or going completely out of control: a generally negative result.
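The difference is easy to show in a toy simulation (my sketch, not anything from the NPR segment): a loop that feeds back a correction opposing the error settles toward a target, while a loop that feeds back reinforcement runs away exponentially.

```csharp
using System;

static class FeedbackDemo
{
    // Negative feedback: each step applies a correction that opposes the
    // error, so the output settles toward the target.
    public static double NegativeFeedback(double target, double output, int steps)
    {
        for (int i = 0; i < steps; i++)
            output += 0.5 * (target - output); // correction shrinks the error
        return output;
    }

    // Positive feedback: each step reinforces the deviation, so it grows
    // exponentially instead of settling.
    public static double PositiveFeedback(double deviation, int steps)
    {
        for (int i = 0; i < steps; i++)
            deviation += 0.5 * deviation; // reinforcement amplifies the deviation
        return deviation;
    }

    public static void Main()
    {
        Console.WriteLine(NegativeFeedback(100.0, 20.0, 10)); // settles near 100
        Console.WriteLine(PositiveFeedback(1.0, 10));         // runs away (~57.7)
    }
}
```

Falling confidence that reduces spending that further reduces confidence is the second loop, not the first.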

Reporters, please do a little research before using terms that you’re not familiar with. It’ll save you a lot of embarrassment, and prevent you from confusing people.

The Trump-McConnell shutdown

As the current government shutdown approaches a month, little has changed. Democrats blame President Trump for the impasse; Trump and his supporters like to blame Democrats, even though the president himself said, “I will shut it down”; and pretty much everybody else blames general government dysfunction.

The truth, though, is that there’s a third party involved: Senate Majority Leader and oathbreaker extraordinaire, Mitch McConnell, who is once more derelict in his duty.

Our system of passing legislation is supposed to be pretty simple. For legislation that involves funding the government, the House passes a bill and sends it to the Senate. The Senate debates that bill and either passes or rejects it. If it’s rejected, typically there is a conference committee in which members of the House and Senate work out disagreements. Eventually, either the bill passes or is finally rejected. If the bill passes both houses of Congress then it goes to the president, who has two choices: 1) Sign the bill, making it law; 2) Veto the bill. In the case of #2, Congress can elect to override the president’s action by a two-thirds vote. If two-thirds of both houses of Congress vote for the bill after the president has vetoed it, the bill becomes law.

If Congress were to pass a spending bill and send it to the president, Donald Trump would have to take action: sign the bill or veto it. Neither of those actions would be beneficial to the president.

If President Trump were to veto the bill, then he would have to take full responsibility for the shutdown. He could no longer blame the situation on Democrats. It would be the same as standing up and saying, “I believe that it is more important to fund my wall than it is for government to function normally.” Plus, there’s a slight possibility that two-thirds of Congress would vote to override his veto, making him look like a fool. Many people think that ship has sailed already, but if Congress were to override his veto, even Trump would see himself as a fool. And weak.

If the president were to sign the bill, something I can’t see happening, all his bluster over the last month or so would look foolish. He’d be excoriated for “caving,” his detractors would ridicule him to no end, and his base would probably condemn him as a traitor.

In short, there is no way President Trump comes out looking good if Congress presents him with legislation that doesn’t fund his border wall.

Make no mistake, Trump painted himself into this corner. He made an ultimatum, fully expecting Congressional Democrats to cave. They didn’t, and now he’s in a tough spot. The only way he can win is if the Democrat-controlled House agrees to fund his wall, and there is almost no incentive for them to do so. As much as he tries, Trump can’t deflect responsibility for the shutdown that he instigated, and the longer it drags on, the more people blame the current situation on him.

So what does this have to do with Mitch McConnell? As Senate Majority Leader, Mitch McConnell has absolute control over what legislation is debated in the Senate. Nothing gets heard without his approval. And for reasons I cannot fathom, Mitch McConnell has made himself Donald Trump’s protector. In 2016, McConnell prevented the Senate from holding hearings on President Obama’s Supreme Court nominee. In doing so, McConnell violated his Oath of Office, which says, in part, “and that I will well and faithfully discharge the duties of the office on which I am about to enter.” McConnell very publicly refused to do his duty. There is no interpretation of that Oath that allows him to refuse just because it would be politically inconvenient.

In the current situation, McConnell knows that the Senate might just pass a bill that does not fund the wall, putting the president in a no-win situation. So he just doesn’t allow the Senate to debate the bill. I guess it doesn’t cost McConnell anything to do this: he’s already shredded his integrity. My only question here is why the rest of the senators don’t kick him to the curb. Allowing Mitch McConnell, a man who wouldn’t know integrity if it jumped up and slapped him across the face, to represent the United States Senate makes them all look bad.

I suppose I do have one other question: What power does Donald Trump hold over Mitch McConnell to make him act this way?

The president would have you believe that he’s fighting the good fight in the name of National Security. It’s all a smoke screen to hide the fact that he is vulnerable and fully dependent on an unscrupulous Senate Majority Leader. Trump’s supporters, even the few who know the truth of his powerlessness, eat it up and will continue to do so as long as he keeps up the bluster. Democrats are going to hate him regardless, and independents long ago dismissed him as a fool. It costs the president nothing unless the Senate replaces McConnell with somebody worthy of the title so that Congress can get back to doing its job. Or unless McConnell somehow gets a better offer. Expecting him to find the shreds of his discarded integrity is laughably naive.

The shutdown will go on until one of these things happens, in order of increasing likelihood:

  1. Trump gives up.
  2. McConnell allows the Senate to debate a spending bill that does not fund Trump’s wall.
  3. The Senate kicks McConnell to the curb.
  4. The House passes a bill that partially funds Trump’s border wall.
  5. Trump, like he’s done so many times in the past with other things, conveniently forgets about the shutdown and finds some way to spin things for his supporters.

I consider the last two to be almost equally likely.

Whatever the case, Trump doesn’t win here. With the first three options, he’s revealed as a weak fool. With the fourth, he retains some dignity, but will have to swallow some pride because he didn’t get exactly what he wanted. The fifth option is a defeat and he loses some supporters, but his inability to admit defeat protects his fragile ego. To him, it’ll be as though the shutdown never happened.

Setting JVM memory limits

Java has an option, -XX:MaxRAMFraction=XXX, that lets developers specify the maximum amount of available RAM a Java application should occupy. This sounds like a great idea, until you realize how it’s implemented. The number that you specify for “XXX” is an integer that represents the fraction of total RAM that should be used. The interpretation is 1/XXX. So -XX:MaxRAMFraction=1 tells the application to use as much RAM as it wants. -XX:MaxRAMFraction=2 tells it to use half of the memory. What if you want to use 75% of the memory? Or, perhaps, 2/3 of the memory? Tough luck. You get all, one-half, one-third, one-fourth, one-fifth, etc. You want to use 40%? Sorry, buddy. You can get 50% or 33%. Take your pick.
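To see just how coarse that granularity is, here’s a quick sketch (the class name is mine; the arithmetic is simply the 1/XXX interpretation described above) enumerating the only caps the flag can express:

```csharp
using System;

static class MaxRamFractionDemo
{
    // -XX:MaxRAMFraction=N caps the heap at 1/N of available RAM, so the
    // only achievable caps are 100%, 50%, 33.3%, 25%, 20%, ...
    public static double PercentFor(int n) => 100.0 / n;

    public static void Main()
    {
        for (int n = 1; n <= 5; n++)
            Console.WriteLine($"-XX:MaxRAMFraction={n} -> {PercentFor(n):F1}% of RAM");
        // There is no integer N for which 100/N equals 75, 66.7, or 40;
        // those targets are simply unreachable with this flag.
    }
}
```

The gap between 100% and 50% is the painful one: there is no way to land anywhere in between.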

When I first heard this, I thought the guy who was explaining it was misinformed or pulling my leg.

I’d sure like to understand the thinking that went into this design decision, because from where I’m standing it looks like lunacy. I can’t imagine how such an idiotic design could make it through review and implementation without somebody with a modicum of intelligence and authority raising a stink. You mean nobody realized how stupid this is?

Java 10 introduced the -XX:MaxRAMPercentage and associated flags. See https://bugs.openjdk.java.net/browse/JDK-8186315 for more information.

Sanity prevails, but I still want to understand how the original design got approved. That something so obviously wrong managed to make it through the approval process and was released on an unsuspecting world doesn’t give me much confidence in the competence of Java platform developers.


How did this happen?

Last time I showed two different implementations of the naive method for generating unique coupon codes. The traditional method does this:

    do
    {
        id = pick random number
    } until id not in used numbers
    code = generate code from id

The other is slightly different:

    do
    {
        id = pick random number
        code = generate code from id
    } until code not in used codes

Before we start, understand that I strongly discourage you from actually implementing such code. The naive selection algorithm performs very poorly as the number of items you choose increases. But the seemingly small difference in these two implementations allows me to illustrate why I think that the second version is broken in a fundamental way.

Think about what we’re doing here. We’re obtaining a number by which we are going to identify something. Then we’re generating a string to express that number. It just so happens that in the code I’ve been discussing these past few entries, the expression is a six-digit, base-31 value.

So why are we saving the display version of the number? If we were going to display the number in decimal, there’d be no question: we’d save the binary value. After all, the user interface already knows how to display a binary value in decimal. Even if we were going to display the number in hexadecimal, octal, or binary, we’d store the binary value. We certainly wouldn’t store the string “3735928559”, “11011110101011011011111011101111”, or “DEADBEEF”. So, again, why store the string “26BB62” instead of the 32-bit value 33,554,432?

I can’t give you a good reason to do that. I can, however, give you reasons not to.

  1. Storing a six-character string in the database takes more space than storing a 32-bit integer.
  2. Everything you do with a six-character code string takes longer than if you were working with an integer.
  3. Display logic should be part of the user interface rather than part of the database schema.
  4. Changing your coding scheme becomes much more difficult.

The only argument I can come up with for storing codes rather than integer identifiers is that with the former, there’s no conversion necessary once the code is generated. Whereas that’s true, it doesn’t hold up under scrutiny. Again, nobody would store a binary string just because the user interface wants to display the number in binary. It’s a simple number base conversion, for crying out loud!

If you’re generating unique integer values for database objects, then let the database treat them as integers. Let everybody treat them as integers. Except the users. “26BB62” is a whole lot easier to read and to type than is “33554432”.
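To make the “it’s a simple number base conversion” point concrete, here’s a sketch in C#. The 31-character alphabet is my invention — the original system’s digit set isn’t shown in the post — so treat `CouponCodec` and its `Alphabet` as illustrative only, not as the real scheme.

```csharp
using System;
using System.Text;

static class CouponCodec
{
    // Hypothetical 31-character alphabet; the real system's digit set
    // and ordering are not shown in the original post.
    private const string Alphabet = "123456789ABCDEFGHJKLMNPQRSTUVWX";

    // Convert the stored integer id to its six-digit base-31 display code.
    public static string Encode(int id)
    {
        var sb = new StringBuilder();
        for (int i = 0; i < 6; i++)
        {
            sb.Insert(0, Alphabet[id % 31]); // least significant digit last
            id /= 31;
        }
        return sb.ToString();
    }

    // Convert a display code back to the stored integer id.
    public static int Decode(string code)
    {
        int id = 0;
        foreach (char c in code)
            id = id * 31 + Alphabet.IndexOf(c);
        return id;
    }

    public static void Main() =>
        Console.WriteLine(Decode(Encode(33554432)) == 33554432); // round-trips
}
```

The conversion lives entirely in the presentation layer; the database never needs to see anything but the integer.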

I’ll be charitable and say that whoever came up with the idea of storing the display representation was inexperienced. I’m much less forgiving of the person who approved the design. The combination of the naive selection algorithm, storing the generated coupon code rather than the corresponding integer, and the broken base conversion function smacks of an inexperienced programmer working without adult supervision. Or perhaps an inexperienced programmer working with incompetent adult supervision, which amounts to the same thing.

The mere existence of the naive selection algorithm in this code is a sure sign that whoever wrote the code either never studied computer science, or slept through the part of his algorithms class where they studied random selection and the birthday paradox (also, birthday problem). The comment, “There is a tiny chance that a generated code may collide with an existing code,” tells us all we need to know about how much whoever implemented the code understood about the asymptotic behavior of this algorithm. Unless your definition of “tiny” includes a 10% chance after only one percent of the codes have been generated.

The decision to store the generated code string surprises me. You’d think that a DBA would want to conserve space, processor cycles, and data transfer size. Certainly every DBA I’ve ever worked with would prefer to store and index a 32-bit integer rather than a six-character string.

The way in which the base conversion function is broken is hard evidence that whoever modified it was definitely not an experienced programmer. I suspect strongly that the person who wrote the original function knew what he was doing, but whoever came along later and modified it was totally out of his depth. That’s the only way I can explain how the function ended up the way it did.

Finally, that all this was implemented on the database looks to me like a case of seeing the world through database-colored glasses. Sure, one can implement an obfuscated unique key generator in T-SQL. Whether one should is another matter entirely. Whoever did it in this case shouldn’t have tried, because he wasn’t up to the task.

However this design came to exist, it should not have been approved. If there was any oversight at all, it should have been rejected. That it was implemented in the first place is surprising. That it was approved and put into production borders on the criminal. Either there was no oversight, or the person reviewing the implementation was totally incompetent.

With the code buried three layers deep in a database stored procedure, it was pretty safe from prying eyes. So the bug remained hidden for a couple of years. Until a crisis occurred: a duplicate code was generated. Yes, I learned that the original implementation didn’t even take into account the chance of a duplicate. Apparently somebody figured that with 887 million possible codes, the chance of getting a duplicate in the first few million was so remote as to be ignored. Little did they know that the chance of getting a duplicate within the first 35,000 codes is 50%.
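That 50% figure falls straight out of the standard birthday approximation, p ≈ 1 − e^(−n²/2N). A quick check (my sketch, using 31⁶ ≈ 887 million as the code space):

```csharp
using System;

static class BirthdayCheck
{
    // Approximate probability of at least one collision after drawing n
    // values uniformly at random from N possibilities (birthday approximation).
    public static double CollisionProbability(double n, double N)
        => 1.0 - Math.Exp(-n * n / (2.0 * N));

    public static void Main()
    {
        double N = Math.Pow(31, 6); // 887,503,681 possible six-digit base-31 codes
        Console.WriteLine(CollisionProbability(35_000, N)); // ~0.50
    }
}
```

Thirty-five thousand codes is a rounding error next to 887 million, which is exactly why intuition fails so badly here.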

I also happen to know that at this point there was oversight by a senior programmer who did understand the asymptotic behavior of the naive algorithm, and yet approved “fixing” the problem by implementing the duplicate check. He selected that quick fix rather than replacing the naive algorithm with the more robust algorithm that was proposed: one that does not suffer from the same pathological behavior. The combination of arrogance and stupidity involved in that decision boggles the mind.

The history of this bug is rooted in company culture, where the database is considered sacrosanct: the domain of DBAs, no programmers allowed, thank you very much. And although programmers have access to all of the source code, they aren’t exactly encouraged to look at parts of the system outside of their own areas. Worse, they’re actively discouraged from making suggestions for improvement.

In such an environment, it’s little surprise that the horribly broken unique key generation algorithm survives.

So much for that. Next time we’ll start looking at good ways to generate unique, “random-looking” keys.

Welcome to the cesspool

Today President Trump signed an executive order titled REDUCING REGULATION AND CONTROLLING REGULATORY COSTS. This fulfills a campaign pledge to reduce burdensome regulation. On the face of it, I applaud the measure, especially the provision that says, “whenever an executive department or agency publicly proposes for notice and comment or otherwise promulgates a new regulation, it shall identify at least two existing regulations to be repealed.”

I suspect that the net effect of this order will be approximately zero, as far as business is concerned. In the first place, there are so many exceptions listed, it’s likely that any department head with a modicum of intelligence will get his proposals exempted from the new rules. And in the unlikely event that they do have to identify regulations to be repealed, they have a plethora of idiotic rules that are on the books and no longer enforced. It’ll take them years to clear that cruft from the books. So at best what we’ll see is large numbers of unenforced or unenforceable regulations being repealed. A Good Thing, no doubt, but not something that businesses will notice.

The Director of the Office of Management and Budget is tasked with specifying

“… processes for standardizing the measurement and estimation of regulatory costs; standards for determining what qualifies as new and offsetting regulations; standards for determining the costs of existing regulations that are considered for elimination; processes for accounting for costs in different fiscal years; methods to oversee the issuance of rules with costs offset by savings at different times or different agencies; and emergencies and other circumstances that might justify individual waivers of the requirements …”

It looks to me like the president’s order creates more regulations and more work, which probably will require increased expenses. I wonder if his order is subject to the new 1-for-2 rule.

I mentioned exceptions above. The order exempts:

  • regulations issued with respect to a military, national security, or foreign affairs function of the United States
  • regulations related to agency organization, management, or personnel
  • any other category of regulations exempted by the Director (of the OMB)

A savvy department head can probably make a good argument that any new regulation fits one of the first two. Failing that, being “in” with the Director of OMB will likely get you a pass.

Also, Section 5 states that the order will not “impair or otherwise affect”

  • the authority granted by law to an executive department or agency, or the head thereof
  • the functions of the Director relating to budgetary, administrative, or legislative proposals

Oh, and the last part, Section 5(c) says:

“This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.”

In other words, this isn’t Law, but rather the president’s instructions to his subordinates.

The dog can’t bite; it can hardly growl. But the president can say that he “did something about the problem,” and thus get marks for keeping a campaign pledge.

So much for draining the swamp. This is the way things have been done in Washington for decades. Make a big deal of signing a regulation with a feel-good title that does nothing (or, worse, does exactly the opposite of what you would expect), bask in the praise of your supporters, and then go about business as usual.

Welcome to the cesspool, Mr. President.

It’s all about context

The C# using directive and implicitly typed local variables (i.e. using var) are Good Things whose use should be encouraged in C# programs, not prohibited or severely limited. Used correctly (and it’s nearly impossible to use them incorrectly), they reduce noise and improve understanding, leading to better, more maintainable code. Limiting or prohibiting their use causes clutter, wastes programmers’ time, and leads to programmer dissatisfaction.

I’m actually surprised that I find it necessary to write the above as though it’s some new revelation, when in fact the vast majority of the C# development community agrees with it. Alas, there is at least one shop–the only one I’ve ever encountered, and I’ve worked with a lot of C# development houses–that severely limits the use of those two essential language features.

Consider this small C# program that generates a randomized list of numbers from 0 through 99, and then does something with those numbers:

    namespace MyProgram
    {
        public class Program
        {
            static public void Main()
            {
                System.Collections.Generic.List<int> myList = new System.Collections.Generic.List<int>();
                System.Random rnd = new System.Random();
                for (int i = 0; i < 100; ++i)
                {
                    myList.Add(rnd.Next(100));
                }
                System.Collections.Generic.List<int> newList = RandomizeList(myList, rnd);

                // now do something with the randomized list
            }
            
            static private System.Collections.Generic.List<int> RandomizeList(
                System.Collections.Generic.List<int> theList,
                System.Random rnd)
            {
                System.Collections.Generic.List<int> newList = new System.Collections.Generic.List<int>(theList);
                for (int i = theList.Count-1; i > 0; --i)
                {
                    int r = rnd.Next(i+1);
                    int temp = newList[r];
                    newList[r] = newList[i];
                    newList[i] = temp;
                }
                return newList;
            }
        }
    }

I know that’s not the most efficient code, but runtime efficiency is not really the point here. Bear with me.

Now imagine if I were telling you a story about an experience I shared with my friends Joe Green and Bob Smith:

Yesterday, I went to Jose’s Mexican Restaurant with my friends Joe Green and Bob Smith. After the hostess seated us, Joe Green ordered a Mexican Martini, Bob Smith ordered a Margarita, and I ordered a Negra Modelo. For dinner, Joe Green had the enchilada special, Bob Smith had Jose’s Taco Platter . . .

Wouldn’t you get annoyed if, throughout the story, I kept referring to my friends by their full names? How about if I referred to the restaurant multiple times as “Jose’s Mexican Restaurant” rather than shortening it to “Jose’s” after the first mention?

The first sentence establishes context: who I was with and where we were. If I then referred to my friends as “Joe” and “Bob,” there would be no ambiguity. If I were to write 50 pages about our experience at the restaurant, nobody would get confused if I never mentioned my friends’ last names after the first sentence. There could be ambiguity if my friends were Joe Smith and Joe Green, but even then I could finesse it so that I didn’t always have to supply their full names.

Establishing context is a requirement for effective communication. But once established, there is no need to re-establish it if nothing changes. If there’s only one Joe, then I don’t have to keep telling you which Joe I’m referring to. Doing so interferes with communication because it introduces noise into the system, reducing the signal-to-noise ratio.

If you’re familiar with C#, most likely the first thing that jumps out at you in the code sample above is the excessive use of full namespace qualification: all those repetitions of System.Collections.Generic.List that clutter the code and make it more difficult to read and understand.

Fortunately, the C# using directive allows us to establish context, thereby reducing the amount of noise in the signal:

    using System;
    using System.Collections.Generic;
    namespace MyProgram
    {
        public class Program
        {
            static public void Main()
            {
                List<int> myList = new List<int>();
                Random rnd = new Random();
                for (int i = 0; i < 100; ++i)
                {
                    myList.Add(rnd.Next(100));
                }
                List<int> newList = RandomizeList(myList, rnd);

                // now do something with the randomized list
            }
            
            static private List<int> RandomizeList(
                List<int> theList,
                Random rnd)
            {
                List<int> newList = new List<int>(theList);
                for (int i = theList.Count-1; i > 0; --i)
                {
                    int r = rnd.Next(i+1);
                    int temp = newList[r];
                    newList[r] = newList[i];
                    newList[i] = temp;
                }
                return newList;
            }
        }
    }

That’s a whole lot easier to read because you don’t have to parse the full type name to find the part that’s important. Although it might not make much of a difference in this small program, it makes a huge difference when you’re looking at code that uses a lot of objects, all of whose type names begin with MyCompany.MyApplication.MySubsystem.MyArea.

The use of using is ubiquitous in C# code. Every significant Microsoft example uses it. Every bit of open source code I’ve ever seen uses it. Every C# project I’ve ever worked on uses it. I wouldn’t think of not using it. Nearly every C# programmer I’ve ever met, traded email with, or seen post on StackOverflow and other programming forums uses it. Even the very few who generally don’t use it bend the rules, usually when LINQ is involved, and for extension methods in general.

I find it curious that extension methods are excepted from this rule. I’ve seen extension methods cause a whole lot more programmer confusion than the using directive ever has. Eliminating most in-house created extension methods would actually improve the code I’ve seen in most C# development shops.
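To illustrate that potential for confusion, here’s a hypothetical in-house extension method (the namespace, class, and method names are mine, not from any real library). At the call site it looks exactly like an instance method, so a reader can’t tell from the call alone where it’s defined:

    using System;

    namespace MyCompany.Extensions
    {
        public static class StringExtensions
        {
            // Hypothetical helper: true if the string is null, empty,
            // or nothing but whitespace.
            public static bool IsBlank(this string s)
            {
                return string.IsNullOrWhiteSpace(s);
            }
        }
    }

    namespace MyProgram
    {
        using MyCompany.Extensions;

        public class Program
        {
            static public void Main()
            {
                // Nothing at this call site says IsBlank is an extension
                // method; the reader has to know which using directive
                // brought it into scope.
                Console.WriteLine("   ".IsBlank());
            }
        }
    }

Remove the using MyCompany.Extensions directive and the call stops compiling, which is exactly the kind of surprise that trips people up.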

The arguments against employing the using directive mostly boil down to, “but I can’t look at a code snippet and know immediately what the fully qualified name is.” And I agree: somebody who is unfamiliar with the C# base class library or with the libraries being used in the current project will not have the context. But it’s pretty unlikely that a company will hire an inexperienced C# programmer for a mid-level job, and anybody the company hires will be unfamiliar with the project’s class layout. In either case, a new guy might find the full qualification useful for a week or two. After that, he’s going to be annoyed by all the extra typing and by having to wade through noise to find what he’s looking for.

As with having friends named Joe Green and Joe Smith, there is potential for ambiguity if you have identically named classes in separate namespaces. For example, you might have an Employee class in your business layer and an Employee class in your persistence layer. But if your code is written rationally, there will be very few source files in which you refer to both. And in those, you can either revert to fully-qualified type names, or you can use namespace aliases:

    using BusinessEmployee = MyCompany.MyProject.BusinessLogic.Employee;
    using PersistenceEmployee = MyCompany.MyProject.Persistence.Employee;

Neither is perfect, but either one is preferable to mandating full type name specification in the entire project.
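To see how the aliases play out, here’s a sketch of a file that genuinely needs both classes (the Mapping namespace and the Id and Name properties are hypothetical, chosen for illustration):

    using BusinessEmployee = MyCompany.MyProject.BusinessLogic.Employee;
    using PersistenceEmployee = MyCompany.MyProject.Persistence.Employee;

    namespace MyCompany.MyProject.Mapping
    {
        public static class EmployeeMapper
        {
            // Copies a business-layer Employee into a persistence-layer
            // Employee, assuming both classes expose Id and Name.
            static public PersistenceEmployee ToPersistence(BusinessEmployee source)
            {
                return new PersistenceEmployee
                {
                    Id = source.Id,
                    Name = source.Name
                };
            }
        }
    }

Everywhere else in the project, a file that touches only one of the two Employee classes can keep a plain using directive and refer to it simply as Employee.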

The code example still has unnecessary redundancy in it that impedes understanding. Consider this declaration:

    List<int> myList = new List<int>();

That’s like saying, “This variable called ‘myList’ is a list of integers, and I’m going to initialize it as an empty list of integers.” You can say the same thing more succinctly:

    var myList = new List<int>();

That eliminates the redundancy without reducing understanding.

There is some minor debate in the community about using var (i.e. implicit typing) in other situations, such as:

    var newList = RandomizeList(myList, rnd);

Here, it’s not plainly obvious what newList is, because we don’t know what RandomizeList returns. The compiler knows, of course, so the code isn’t ambiguous, but we mere humans can’t see it immediately. However, if you’re using Visual Studio or any other modern IDE, you can hover your mouse over the call to RandomizeList, and a tooltip will appear, showing you the return type. And if you’re not using a modern IDE to write your C# code, you have a whole host of other problems that are way more pressing than whether or not a quick glance at the code will reveal a function’s return type.

I’ve heard people say, “I use var whenever the type is obvious, or when it’s necessary.” The “necessary” part is a smokescreen. What they really mean is, “whenever I call a LINQ method.” That is:

    var oddNumbers = myList.Where(i => (i%2) != 0);

The truth is, they could easily have written:

    IEnumerable<int> oddNumbers = . . .

The only time var is absolutely necessary is when you’re working with anonymous types. For example, this code creates an anonymous type that contains the index and the value of each item:

    var oddNumbers = myList.Select((val, i) => new {Index = i, Value = val});

I used to be in the “only with LINQ” camp, by the way. But after a while I realized that most of the time the return type was perfectly obvious from how the result was used, and the few times it wasn’t, a quick mouse hover revealed it. I’m now firmly in the “use implicit typing wherever it’s allowed” camp, and my code has improved as a result. With the occasional exception of code snippets taken out of context, I’ve never encountered a case in which using type inference made code harder to understand.

If you find yourself working in one of the hopefully very few shops that restricts the use of these features, you should actively encourage debate and attempt to change the policy. Or start looking for a new place to work. There’s no reason to suffer through this kind of idiocy just because some “senior developer” doesn’t want to learn a new and better way to do things.

If you’re one of the hopefully very few people attempting to enforce such a policy (and I say “attempting” because I’ve yet to see such a policy successfully enforced), you should re-examine your reasoning. I think you’ll find that the improvements in code quality and programmer effectiveness that result from the use of these features far outweigh the rare minor inconveniences you encounter.

I love this. It’s what? I hate that!

When we were kids, we spent a lot of our summer days at home, playing in the pool and jumping on the trampoline. And nearly every day, Mom would make sandwiches for our lunch, which she served outside on the patio picnic table. Those sandwiches were usually lunch meat: bologna, salami, or something similar, along with Miracle Whip and some lettuce, and maybe other stuff. The details are a little foggy now, 45 years later.

I do recall that at some point Mom began making the sandwiches with leaf spinach rather than lettuce. One day, after several days of eating these slightly modified sandwiches, my youngest sister, Melody, commented: “Mom, I really like this new lettuce!” That was a mistake.

You see, of the five of us, the three oldest (myself included) knew that the “new lettuce” was actually spinach. I’m not sure about Marie, who’s a year younger than I, but I know for certain that Melody, the youngest, had no idea that she had been eating spinach for the last few days. And of course my brother and I thought it was our duty to educate our sister. I’m not sure which one of us actually said, “That ‘new lettuce’ is actually leaf spinach.”

Melody looked up at us skeptically (we might have played some tricks on her before), and then looked at Marilyn (oldest sister) for confirmation. Marilyn had already done a face-palm, knowing what the reaction was going to be, and Melody took that as confirmation. She put her sandwich down and said, “Ewwww. I hate spinach!” She wouldn’t finish her sandwich and for weeks after that she’d carefully inspect whatever was put in front of her to ensure that Mom wasn’t trying to sneak something by. If she didn’t recognize it, she wouldn’t eat it.

Understand, Melody was maybe five or six years old at the time. So I guess I can cut her some slack.

Back in the late ’90s, a friend came to visit, and Debra and I took her to have sushi. Our friend liked a particular type of sushi roll, and was excited to be having it again. I don’t remember exactly which roll it was, but one of the things she really liked about it was the crunchy texture and the taste of the masago (capelin roe) that was on the outside of the roll. Since she liked sushi and was ecstatic about having that roll, I figured she knew what she was eating. So I said something about fish eggs.

Her response was worse than Melody’s: she put down the piece she was holding, spit out what was in her mouth, and then drank a whole glass of water to get rid of the taste. This was after she’d already eaten two pieces of the roll while enthusiastically telling us how much she liked it. But after she found out that masago is fish eggs, she wouldn’t touch another bite.

Since then, I’ve seen similar reactions from many other people. I call it the “I love this. It’s what? I hate that!” reaction. I can almost understand it with food, because I’ve been in the position of being told what something was after I ate it, and I felt the internal turmoil of having eaten something that I probably wouldn’t have eaten had I known what it was beforehand. But I can’t at all understand that reaction when applied to other things. Politics, for example.

I’ve actually seen conversations that went something like this:

Person 1: “That’s a really good idea.”

Person 2: “Yeah, when President Obama proposed it, I ….”

Person 1: “Obama proposed it? What a stupid idea!”

And, of course, several years ago I saw similar conversations, but with “Bush” replacing “Obama.”

I would find it funny if it weren’t so common. It seems as though, when it comes to politics, a large fraction of the American public is more interested in who the ideas come from than in whether the ideas have any merit. We call that “tribalism.” It’s stupid in the extreme.

Inconceivable!

Every time I hear President Obama say something “will not be tolerated,” I’m reminded of Vizzini’s “inconceivable!”

The silly sanctions we’re placing on Russian officials will have approximately the same effect as the Bush administration’s pointless sanctions on North Korea back in 2006: banning the export of iPods, Segway scooters, and other luxury items so that Kim Jong-Il wouldn’t be able to give trinkets to his cronies.

But the administration has to bluster and posture and appear to be “doing something about the problem” even though the president and anybody with more brains than your average mailbox knows that we are completely powerless: Russia will do what Putin wants, openly and without fear of reprisal.

Why do I think we’re powerless? First, military confrontation is out of the question. Russia isn’t a little country with an insignificant military that we can overpower with a division or two of National Guard troops. Air strikes, even if we were stupid enough to try and could get past the Russian defenses, would be met with major reprisals and ultimately result in a real war that nobody wants and if fought would destroy the world economy.

Not that we could make any kind of military strike. President Obama isn’t dumb enough to try convincing us that Ukraine is worth another foreign excursion, and no member of Congress who is up for re-election this year is likely to approve such a thing even if the president were to suggest it.

Second, serious economic sanctions are out of the question because large parts of Europe depend on Russian natural gas for heat. Nobody in Europe wants to risk upsetting Putin, because he could easily turn off the spigot. The first cold snap come Fall would have every citizen in Europe demanding that The Powers That Be let Russia have whatever they want, so long as the gas is turned back on. Putin isn’t beyond punishing our European allies in response to American “provocation.”

It does make for good theater, though. Too bad we can’t bottle all the hot air from both sides. Between Obama’s rhetoric, Putin’s laughable denials, and Republicans’ excoriating the president for his lack of action, we could make a serious start to reducing the amount of fossil fuels burned to keep warm next Winter.