Understand, I don’t know anything about the internals of the software running on these machines, but I know enough about pseudorandom number generators, their use and misuse, to offer a plausible explanation of the vulnerability and how it’s exploited. I also know a few people in the industry. What I describe below is possible, and from my experience quite likely to have happened. Whether it’s exactly what happened, or if it’s even close to the mark, I have no way of knowing.
First, a little background.
In modern computer-controlled slot machines (say, anything built in the last 30 years), the machine uses random numbers to determine the results of a spin. In concept, this is like rolling dice, but the number of possibilities is huge: on the order of about four billion. In theory, every one of those four billion outcomes is equally likely every time you roll the dice. That would be true in practice if the computer were using truly random numbers.
But computers are deterministic; they don’t do random. Instead, they use algorithms that simulate randomness. As a group, these algorithms are called pseudorandom number generators, or PRNGs. You can probably guess that PRNGs differ in how well they simulate true randomness. They also differ in ease of implementation, speed, and something called “period.” You see, a PRNG is just a mathematical way to define a deterministic, finite sequence. Given a starting state (called the seed), the PRNG will generate a finite set of values before it “wraps around” to the beginning and starts generating the same sequence all over again. Period is the number of values generated before wrapping. If you know what PRNG is being used and you know the initial state (seed), then you know the sequence of numbers that will be generated.
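That determinism is easy to demonstrate. Here’s a quick Python sketch using a linear congruential generator with constants borrowed from Numerical Recipes; the constants are purely illustrative, and have nothing to do with whatever generator a real machine uses. The point is only that the same seed always yields the same “random” sequence.

```python
def lcg(state):
    # One step of a linear congruential generator. These constants are
    # the Numerical Recipes values, used here only for illustration.
    return (1664525 * state + 1013904223) % 2**32

def sequence(seed, n):
    # Generate n successive values starting from the given seed.
    values, state = [], seed
    for _ in range(n):
        state = lcg(state)
        values.append(state)
    return values

# Identical seeds yield identical "random" numbers, every time.
assert sequence(42, 5) == sequence(42, 5)
```

Run it twice, run it on another machine, run it next year: same seed, same sequence.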
The machines in question were purchased on the secondary market sometime before 2009. It’s probably safe to say that the machines were manufactured sometime between 1995 and 2005. During that era, the machines were almost certainly running 32-bit processors, and likely were generating 32-bit random numbers using PRNGs that maintained 32 bits of state. That means that there are 2^32 (four billion and change) possible starting states, each of which has a maximum period of 2^32. Overall, there are 2^64 possible states for the machine to be in. That’s a huge number, but it’s possible to compute and store every possible sequence so that, if somebody gave you a few dozen random numbers in sequence, you could find out where they came from and predict the next number. It’d take a few days and a few terabytes of disk storage to precompute all that, but you’d only have to do it once.
It’s likely that the PRNG used in these machines is a linear congruential generator which, although potentially good if implemented well, is easy to reverse-engineer. That is, given a relatively small sequence of generated numbers, it’s possible to compute the seed value and predict the next values in the sequence. All this can be done without knowing exactly which LCG algorithm is being used.
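To give a flavor of how that works, here’s a Python sketch for one simple case: an LCG with a known modulus of 2^32 but an unknown multiplier and increment. Successive differences of the outputs satisfy x2−x1 = a·(x1−x0), so a handful of consecutive outputs is enough to recover both constants and predict the next value. (Real attacks are messier, e.g. when only part of each output is observable; this is just the core idea.)

```python
def crack_lcg(outputs, m=2**32):
    # Given four consecutive outputs of an unknown LCG x' = (a*x + c) % m,
    # recover a and c from successive differences, then predict the next
    # output. Requires the first difference to be invertible mod m
    # (i.e., odd when m is a power of two).
    x0, x1, x2, x3 = outputs
    a = ((x3 - x2) * pow((x2 - x1) % m, -1, m)) % m
    c = (x1 - a * x0) % m
    return (a * x3 + c) % m
```

Feed it four outputs of any LCG with modulus 2^32 (and an odd first difference) and it names the fifth.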
The hackers did have the source code of the program (or they disassembled the ROM), but they didn’t have access to the raw numbers as they were generated. Instead, they had to deduce the random number based on the outcome of the spin. But again, that just takes a little computation time (okay, more than a little, but not too much) to create a table that maps reel positions to the random sequence.
My understanding is that slot machines continually generate random numbers on a schedule, even when the machine is idle. Every few milliseconds, a new number is generated. If it’s not used, then it’s discarded and the next number is generated. So if you know where you are in the sequence at a given time, then you can predict the number that will be generated at any time in the future. Assuming, of course, that your clock is synchronized with the slot machine’s clock.
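Given synchronized clocks, the prediction step is almost trivial. Here’s a Python sketch; the 5-millisecond tick and the LCG constants are assumptions for illustration, not facts about any real machine.

```python
STEP_MS = 5  # assumption: the machine draws a new PRNG value every 5 ms

def lcg(state):
    # Illustrative LCG; the real machine's generator is unknown.
    return (1664525 * state + 1013904223) % 2**32

def value_at(known_state, t_known_ms, t_future_ms):
    # If we know the generator's state at one instant, stepping it
    # forward predicts the value it will produce at any future instant.
    ticks = (t_future_ms - t_known_ms) // STEP_MS
    state = known_state
    for _ in range(ticks):
        state = lcg(state)
    return state
```

So once the central site pins down the machine’s position in the sequence at one moment, it can tell the phone app what the machine will draw at any future moment, clock drift permitting.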
If you refer back to the article, you’ll see that the agent who was working the machine would record the results of several spins, then walk away and consult his phone for a while before coming back to play. That phone consultation was almost certainly uploading the recorded information to the central site, which would crunch the numbers to determine where the machine was in the random sequence. The system knows which numbers in the sequence correspond to high payouts, so it can tell the phone app when to expect them. The agent then goes back to the machine and watches his phone while hovering his finger over the spin button. When the phone says spin, he hits the button.
The system isn’t perfect. With perhaps up to 200 random numbers being generated every second, and human reaction time being somewhat variable, no player will hit the big payout every time. But he’s increased his odds tremendously. Imagine somebody throwing one gold coin into a bucket of a million other coins, and another gold coin into a bucket of 200 other coins. You’re given the choice to blindly choose from one of the two buckets. Which would you choose from?
That might all sound complicated, but it’s really pretty simple in concept. All they did was create a map of the possibilities and devise a way to locate themselves on the map. Once you know where you are on the map, then the rest is a simple matter of counting your steps. Creating the map and the location algorithm likely took some doing, but it’s very simple in concept.
The above explanation is overly broad, I’ll admit, and I wave my hand over a number of technical details, but people at work with whom I’ve discussed this generally agree that this is, at least in broad strokes, how the hackers did it. Understand, I work at a company that develops slot machine games for mobile devices, and several of the programmers here used to work for companies that make real slot machines. They know how these machines work.
When I originally saw the article, I assumed that some programmer had made a mistake in coding or using the PRNG. But after thinking about it more critically, I believe that these machines are representative of the state of the art in that era (1995–2005). I don’t think there was a design or implementation failure here. The only failure would be that of not imagining that in a few years it would be possible for somebody who didn’t have a supercomputer to compute the current machine state in a few minutes and exploit that knowledge. This isn’t a story about incompetence on the part of the game programmers, but rather a story about the cleverness of the crooks who pulled it off. I can admire the technical prowess it took to achieve the hack while still condemning the act itself and the people who perpetrated it.
I suspect that the net effect of this order will be approximately zero, as far as business is concerned. In the first place, there are so many exceptions listed, it’s likely that any department head with a modicum of intelligence will get his proposals exempted from the new rules. And in the unlikely event that they do have to identify regulations to be repealed, they have a plethora of idiotic rules that are on the books and no longer enforced. It’ll take them years to clear that cruft from the books. So at best what we’ll see is large numbers of unenforced or unenforceable regulations being repealed. A Good Thing, no doubt, but not something that businesses will notice.
The Director of the Office of Management and Budget is tasked with specifying
“… processes for standardizing the measurement and estimation of regulatory costs; standards for determining what qualifies as new and offsetting regulations; standards for determining the costs of existing regulations that are considered for elimination; processes for accounting for costs in different fiscal years; methods to oversee the issuance of rules with costs offset by savings at different times or different agencies; and emergencies and other circumstances that might justify individual waivers of the requirements …”
It looks to me like the president’s order creates more regulations and more work, which probably will require increased expenses. I wonder if his order is subject to the new 1-for-2 rule.
I mentioned exceptions above. The order exempts:
regulations concerning military, national security, or foreign affairs functions;
regulations related to agency organization, management, or personnel; and
any other category of regulations that the Director of OMB chooses to exempt.
A savvy department head can probably make a good argument that any new regulation fits one of the first two. Failing that, being “in” with the Director of OMB will likely get you a pass.
Also, Section 5 states that the order will not “impair or otherwise affect” the authority granted by law to a department or agency head, or the functions of the Director of OMB relating to budgetary, administrative, or legislative proposals.
Oh, and the last part, Section 5(c) says:
“This order is not intended to, and does not, create any right or benefit, substantive or procedural, enforceable at law or in equity by any party against the United States, its departments, agencies, or entities, its officers, employees, or agents, or any other person.”
In other words, this isn’t Law, but rather the president’s instructions to his subordinates.
The dog can’t bite; it can hardly growl. But the president can say that he “did something about the problem,” and thus get marks for keeping a campaign pledge.
So much for draining the swamp. This is the way things have been done in Washington for decades. Make a big deal of signing an order with a feel-good title that does nothing (or, worse, does exactly the opposite of what you would expect), bask in the praise of your supporters, and then go about business as usual.
Welcome to the cesspool, Mr. President.
The idea behind compiler warnings was that they pointed out potential programming errors. For example, due to integer overflow, a conversion from unsigned integer to signed integer could be a problem if the unsigned int were large enough. A single compile might produce thousands of warnings, most of which weren’t actual problems. As a result, compiler warnings were treated like The Boy Who Cried Wolf.
But not all warnings are alike, and even some of those unsigned-to-signed warnings really were important. One day I realized that the compiler had produced warnings for some of the bugs that I’d been encountering, and I began writing my code to avoid the warnings. Surprisingly enough, I began to see fewer bugs in my programs.
Some time after I’d had my epiphany, I got a contract to rescue a failing project. The program was “feature complete,” but incredibly unstable. There was a huge number of bugs, and the project was falling further behind schedule. The team had experienced a lot of turnover, including the original lead and several of the senior developers, and they couldn’t give a reliable estimate of when they’d have the program working.
The first day I was there, I got my development system set up and compiled the program. As expected, there were over 1,000 compiler warnings. I spent a few hours looking through the code in question and found a half dozen warnings that were pointing out real problems with the code, and proved that modifying the code to eliminate the warnings resolved some reported bugs. The next day I met with the development team.
If you ever wrote Windows programs back then, you can imagine the response to my proclamation: “The first thing we’re going to do is eliminate all of the compiler warnings.”
“But there’s more than 1,000 warnings!”
“That will take us forever!”
And, the one I was waiting for: “They’re just warnings. If it was important the compiler would have issued an error.”
At that point, I trotted out my half dozen examples of warnings that pointed to actual program bugs. That got everybody’s attention.
It took us a week to eliminate those thousand-plus compiler warnings. In the process, we resolved dozens of reported bugs that were directly attributable to the warnings. By the end of the week the development team was in good spirits and the project manager thought I was a miracle worker.
The following Monday I met with the development team and said, “Now we’re going to Warning Level four.”
And I thought the previous week’s protests were bad! I had compiled the program on Level 4 and got over 2,000 warnings. Many of those truly are spurious: unreachable code, unused parameters and local variables, etc. But some of those warnings are important. For example, the compiler would issue a warning if the code used a variable before it was initialized. I had found a bug in the program that was caused by referencing an uninitialized pointer, but that warning only occurs on Level 4.
It took almost two weeks, and the development manager had to “counsel” one programmer who, rather than fixing the offending code, was instructing the compiler not to issue the warnings.
Not surprisingly, we resolved another few dozen bugs in the process and the code was a lot cleaner, too. A month after I started, we had a solid schedule to fix the remaining issues, and the product shipped six weeks later. The development team was happy and the manager thought I was a miracle worker. My contract had been for three months (13 weeks) but they let me go after 10 weeks, with an extra week’s pay as a bonus. To top things off, the manager set me up with a friend of his who also had a troubled project.
It was a good gig for a while, getting paid for making people pay attention to what their compilers were telling them. Good pay, easy work, and people thinking I was some kind of genius. It’s a weird world we live in.
The moral of the story, of course, is that you shouldn’t ignore compiler warnings. A more important lesson, which the C# team took to heart, is that compiler warnings are ambiguous. The C# language specification is much more rigorous than is the C language specification, and most things that were “just warnings” in C are definite errors in C#. For example, referencing an uninitialized variable is an error. There are fewer than 30 compiler warnings in the latest version (Visual Studio 2015) of the C# compiler. Contrast that to the hundreds of different warnings in a C or C++ compiler.
In my work, I compile with “Warnings as errors.” If the compiler thinks something is important enough to warn me about it, then I’m going to fix the code so that the compiler is happy with it. It’s usually an easy fix. I almost never have to disable the warning altogether.
If you want to sort a list of a user-defined type in C# (and other languages), you need a comparison method. There are several different ways to do that, perhaps the simplest being to create a comparison delegate. For example, if you have this class definition:
public class Foo
{
    public int Id {get; private set;}
    public string Name {get; set;}
    // constructor and other stuff here . . .
}
And a list of those:
List<Foo> MyList = GetFooThings(...);
Now, imagine you want to sort the list in place by Id. You can write:
MyList.Sort((x,y) => { return x.Id.CompareTo(y.Id); });
CompareTo will return a value that is:
less than zero if x.Id < y.Id
zero if x.Id = y.Id
greater than zero if x.Id > y.Id
The code I ran into was similar, except the programmer thought he’d “optimize” the function. Rather than incurring the overhead of calling Int32.CompareTo, he took a shortcut:
MyList.Sort((x,y) => { return x.Id - y.Id; });
His thinking here is that the result of subtracting the second argument from the first will contain the correct value. And that’s true most of the time. It’s not at all surprising that his testing passed. After all, 12 - 3 returns a positive number, 300 - 123456 returns a negative number, and 99 - 99 returns 0.
But he forgot about negative numbers and integer overflow. Subtracting a negative number is like adding a positive number. That is, 4 - (-1) = 5. And in the artificial world of computers, there are limits to what we can express. The largest 32-bit signed integer we can express is Int32.MaxValue, or 2,147,483,647. If you add 1 to that number, the result is Int32.MinValue, or -2,147,483,648. So the result returned by the comparison delegate above, when x is a very large number and y is a sufficiently small negative number, is incorrect. It will report, for example, that 2,147,483,647 is smaller than -1!
There’s a reason that Int32.CompareTo is written the way it is. Here it is, directly from Microsoft’s Reference Source:
public int CompareTo(int value) {
    // Need to use compare because subtraction will wrap
    // to positive for very large neg numbers, etc.
    if (m_value < value) return -1;
    if (m_value > value) return 1;
    return 0;
}
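The failure mode is easy to demonstrate. Here’s a small Python sketch that simulates 32-bit wraparound (Python’s own integers don’t overflow, so we have to wrap them explicitly to reproduce the C# behavior):

```python
def wrap32(n):
    # Reduce n to a signed 32-bit value, the way C# integer
    # arithmetic wraps when overflow checking is off.
    return (n + 2**31) % 2**32 - 2**31

def buggy_compare(x, y):
    # The "optimized" subtraction comparator.
    return wrap32(x - y)

def correct_compare(x, y):
    # Equivalent to Int32.CompareTo: -1, 0, or 1.
    return (x > y) - (x < y)

# 2,147,483,647 - (-1) wraps to a negative number, so the buggy
# comparator claims Int32.MaxValue sorts before -1.
assert buggy_compare(2147483647, -1) < 0
assert correct_compare(2147483647, -1) > 0
```

The two comparators agree on almost every pair of inputs, which is exactly why casual testing never catches the bug.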
It seems like I run into these fundamental errors more often now than I did in the past. I’m not yet sure of the reason. I’ve been active in the online programming community for 30 years, and have seen a lot of code written by programmers of all skill levels. Early on, the average skill level of the programmers I met online was much higher than it is today. These days, every Java programming student has access to Stack Overflow, and I see a lot of these rookie mistakes. But I also see many such mistakes made by more experienced programmers.
I’m wondering if the difference is one of education. When I started programming, we learned assembly language early on and actually used it to write real programs. Two’s complement integer representation was a lesson we learned early, and integer overflow was something we dealt with on a daily basis. We would not have made this mistake. No, we made much more interesting mistakes.
This error can be caught, by the way. If you enable runtime overflow checking in C#, the runtime will throw an exception if the result of an arithmetic operation results in overflow or underflow. But that check is almost always turned off because it can have a large negative impact on performance, and in many places we depend on integer overflow for our algorithms to work correctly. It’s possible to disable overflow checking for a specific expression or block of code, but in my experience that functionality is rarely used. You can also disable the check globally but enable it for specific expressions or blocks of code, but doing so requires that you understand the issue. And if a programmer understood the issue, he wouldn’t have written the erroneous code in the first place.
Errors like this can exist in a working program for years, and then crop up at the most inopportune time when somebody uses the code in a new way or presents data that the original programmer didn’t envision. As much as our languages and tools have evolved over the past 30 years, in some ways we’re still distressingly close to the metal.
class HeapNode
{
    public int Data;
    public HeapNode[] Children;
}
That’s a good conceptual model, but implementing that model can be unwieldy, and can consume a huge amount of memory. When working with small objects in managed languages like .NET, memory allocation overhead can easily exceed the memory used for data. That’s especially true of arrays, which have a 56-byte allocation overhead.
Granted, not all nodes in a pairing heap have children. In fact, at most half of them will. So we can save memory by not allocating an array for the children if there aren’t any. But that adds some complexity to the code, and at best saves only half of the allocation overhead.
Using the .NET List<T> collection doesn’t help, because List<T> uses an array as the backing store. LinkedList<T> will work, but involves its own overhead what with having to manage LinkedListNode instances.
In short, managing a per-node list of children can be difficult.
In my introduction to the Pairing heap, I showed this figure:
2
|
8, 7, 3
|     |
9     4, 5
      |
      6
2 is the root of the tree. It has child nodes 8, 7, and 3. 9 is a child of 8. 4 and 5 are children of node 3, and 6 is a child of 4.
More traditionally, that tree would be drawn as:
      2
    / | \
   8  7  3
  /      / \
 9      4   5
       /
      6
That’s the traditional view of the tree. But we can also say that 8 is the first child of 2, 7 is the sibling of 8, and 3 is the sibling of 7. That is, every node has a reference to its first child, and to its next sibling. The node structure, then, is:
class HeapNode
{
    public int Data;
    public HeapNode FirstChild;
    public HeapNode Sibling;
}
As it turns out, there’s a well known binary tree structure called left-child-right-sibling. Any traditional tree structure can be represented as such a binary tree. Our tree above, when represented as a left-child-right-sibling binary tree, becomes:
    2
   /
  8
 / \
9   7
     \
      3
     /
    4
   / \
  6   5
You might notice that this structure bears striking similarity to the original figure from my introduction. It’s the same thing, only rotated 45 degrees clockwise.
As you can see, this builds a very unbalanced binary tree. That’s okay, since we’re not searching it. With a pairing heap, all of the action is in the first few levels of the tree. A deep tree is good, because it means that we rarely have many siblings to examine when deleting the smallest node.
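The node structure is simple enough that a quick Python sketch shows the whole idea; the C# version comes next time, so this is only a sketch of the two links and of enumerating a node’s children by walking the sibling chain:

```python
class HeapNode:
    def __init__(self, data):
        self.data = data
        self.first_child = None   # "left" link: first child
        self.sibling = None       # "right" link: next sibling

def add_child(parent, child):
    # Push the new child onto the front of the parent's child list.
    child.sibling = parent.first_child
    parent.first_child = child

def children(node):
    # Enumerate a node's children by following sibling links.
    child = node.first_child
    while child is not None:
        yield child.data
        child = child.sibling

# Build the root of the example tree: 2 with children 8, 7, 3
# (added in reverse so they come out in the pictured order).
root = HeapNode(2)
for value in (3, 7, 8):
    add_child(root, HeapNode(value))
assert list(children(root)) == [8, 7, 3]
```

Two references per node, no arrays, no lists: that’s the entire memory cost.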
Implementing a Pairing heap in a leftchildrightsibling binary tree is incredibly easy. Next time I’ll show a simple implementation in C#.
This bothers me, not because the code wastes an insignificant amount of memory, but because it leads to a lot of confusion among students and junior programmers who are trying to implement a binary heap. Off-by-one errors abound, the first being in allocating the heap array. Here’s a common error that occurs when allocating a heap to hold 100 integers.
int[] heap = new int[100];
We’re conditioned from the very start of our programming education to begin counting at 0. Every time we have stuff in an array or list, the first thing is at index 0. Every array-based algorithm we work with reinforces this lesson. We’re taught that some languages used to start at 1, but those heretical languages have nearly been eliminated from the world. And then they foist this folly on us: a heap’s root is at index 1.
We’re taught that 100 items means indexes 0 through 99. It’s beat into our heads on a daily basis when we’re learning programming. Then they trot out this one exception, where we have to allocate an extra item and count from 1 to 100 rather than 0 to 99 like normal programmers do.
Some people say, “but if you start at 0, then the calculations to find the children won’t work.” Well, they’re partially right. If the root is at index 1, then the left child of the node at index x is at index (x * 2), and the right child is at index (x * 2) + 1. The parent of the node at index x is at index x/2. They’re right in that those calculations don’t work if you move the root to index 0. But the changes required to make the calculations work are trivial.
If the root is at index 0, then the left child is at (2 * x) + 1, right child at (2 * x) + 2, and parent at (x - 1)/2.
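The 0-based arithmetic is a one-liner each way; here’s a Python sketch with a few spot-checks:

```python
def left(x):
    return 2 * x + 1      # left child of the node at index x (0-based root)

def right(x):
    return 2 * x + 2      # right child

def parent(x):
    return (x - 1) // 2   # parent (integer division)

# Spot-check: the root's children are at 1 and 2, and both point back to 0.
assert left(0) == 1 and right(0) == 2
assert parent(1) == 0 and parent(2) == 0
# And the relations invert properly further down the tree.
assert parent(left(5)) == 5 and parent(right(5)) == 5
```

One extra add and one extra subtract, and no wasted element at the front of the array.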
The hard core optimizer will tell you that the extra addition in computing the left child, and the extra subtraction when computing the parent will incur a big performance hit. In truth, in the context of a real, working, computer program, the performance difference is down in the noise. It’s highly unlikely that your program’s overall performance will suffer from the addition of a couple of ADD or SUB instructions in a binary heap implementation. If it does, then you’re doing something horribly wrong. Doing something stupid in order to save a few nanoseconds here and there is … well … stupid.
No, I think the real reason we continue this madness is historical. The first known reference to a binary heap is in J.W.J. Williams’ implementation of Heapsort (Williams, J.W.J. (1964), ‘Algorithm 232: Heapsort’, Communications of the ACM 7 (6), 347–348). Yes, that’s right: 52 years ago. His code sample, like all ACM code samples at the time, was written in Algol, a language in which array indexes start at 1.
Textbooks with code samples in Algol and, later, Pascal, perpetuated the idea that the root of a binary heap must be at index 1. It made sense, what with 1 being the first index in the array. When those books were revised in the 1990s and the examples presented in C, Java, C++, etc., a literal conversion of the code maintained the 1based root even though arrays in those languages are 0based. Nobody stopped to consider how confusing that empty first element can be to the uninitiated.
What I find disturbing is that whoever did the code conversions almost certainly ran into an off-by-one problem when testing the examples. But rather than spend time to rewrite the code, they “fixed” the problem by allocating an extra item, and maybe explained it away in the text as something that just has to be done to keep the root at index 1. Those few who actually understood the issue seem to have been too lazy to make the correction, opting instead to explain it away as a performance issue.
In so doing, they’ve done their audiences a grave disservice. I find that inexcusable.
You work in a warehouse that has N storage compartments, labeled D1 through DN. Each storage compartment can hold one shipping crate. There’s also a temporary holding area, called DT, that can hold a single crate. There also are N crates, labeled C1 through CN, each of which is in a storage compartment.
There is a forklift that can pick up a single crate, move it to an empty position, and put it down.
The owner, after touring the facility, decreed that the crates must be rearranged so that crate Cx is in space Dx (C1 in D1, C2 in D2, … CN in DN).
What algorithm would you employ to rearrange the crates in the minimum number of moves?
This is a fairly simple problem. If you pose it as a “balls and bins” problem, with a physical model to play with, most people will develop the optimum algorithm in a matter of minutes. But pose it as a programming problem and a lot of programmers have trouble coming up with the same algorithm.
If you’re interested in trying the problem yourself, cut five squares (crates) from a piece of paper, and label them C1 through C5. Draw six boxes on another sheet of paper, and label them D1 through D6. Now, arrange the crates on the spaces in this order: C4, C1, C5, C3, C2. Leave D6 empty. Now, rearrange them. Remember, if you pick up a square you have to place it in an empty spot before you can pick up another.
So if this problem is so simple, why is it such an interesting interview question and why do programmers have trouble with it?
One reason is that it’s not really a sorting problem. At least not in the traditional sense. You see, I’m not asking the candidate to write a sort. In fact, I’m not asking him to write a program at all. I’m asking him to develop an algorithm that will rearrange the crates in a specific order, given the constraints. The candidate’s first reaction tells me a lot about how he approaches a problem. Granted, I have to take into account that the candidate will expect a whiteboard programming problem, but if his immediate reaction is to stand up and start writing some kind of sort method, I can pretty much guarantee that he’s going down the wrong track.
The algorithm that uses the fewest moves is not one that you would typically write. It’s either computationally expensive, or it uses additional memory. And there’s a well known efficient solution to sorting sequential items. That algorithm works well in the computer, but is less than optimum when translated to the physical world, where it takes time for a forklift to pick up an item and move it.
The well known algorithm starts at the first storage compartment, D1. If the item in that position is not crate C1, it swaps the crate with the crate in the position that it needs to be in. It continues making swaps until C1 is in D1, and then moves on to compartment D2 and does the same thing. So, for example, if you have five crates that are arranged in the order [4,1,5,3,2,x] (the x is for the empty spot, initially DT), the sequence of swaps is:
Swap 4 with 3 (the item that's in position D4), giving [3,1,5,4,2,x]
Swap 3 with 5, giving [5,1,3,4,2,x]
Swap 5 with 2, giving [2,1,3,4,5,x]
Swap 2 with 1, giving [1,2,3,4,5,x]
At which point the programmer says, “Done!”
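The swap-based approach is a few lines in any language; here’s a Python sketch that counts swaps (and, looking ahead to the physical constraint, three forklift moves per swap):

```python
def swap_sort_moves(crates):
    # crates[i] is the crate currently in compartment D(i+1).
    slots = list(crates)
    swaps = 0
    for i in range(len(slots)):
        while slots[i] != i + 1:
            j = slots[i] - 1                 # compartment this crate belongs in
            slots[j], slots[i] = slots[i], slots[j]
            swaps += 1
    return slots, 3 * swaps                  # a physical swap is three moves

slots, moves = swap_sort_moves([4, 1, 5, 3, 2])
assert slots == [1, 2, 3, 4, 5]
assert moves == 12   # four swaps, three moves each
```

Four swaps to sort the example, exactly as traced above.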
The programmer who develops this solution and calls it done might think again if he rereads the problem statement. Note that nowhere did the algorithm use the temporary location, DT. Either the programmer will smell a rat, or he’ll dismiss that temporary location as a red herring.
As it turns out, it’s not a red herring. A swap is actually a sequence of three separate operations. Swapping the contents of D1 and D4, for example, results in:
Move the crate in D1 to DT.
Move the crate in D4 to D1.
Move the crate in DT to D4.
The well known solution to the problem–the solution that is in many ways optimum on the computer–results in 12 individual moves: three for each of the four swaps shown above.
In the worst case, of which this is an example, the algorithm makes N1 swaps, or 3N3 moves.
At this point, it might be useful to have a discussion about what problem we’re really trying to solve. The problem asked for the minimum number of moves. A reminder that we’re talking about a physical process might be in order. To programmers who are used to things happening almost instantly, a few extra moves here and there don’t really make a difference. The solution presented above is asymptotically optimum in that the number of moves required is linearly proportional to the number of crates. That’s typically a good enough answer for a programming question. But, again, this isn’t really a programming question. One can assume that the client wants to make the minimum number of moves because he has to pay a forklift operator $15 per hour and it takes an average of five minutes for every move. So if he can reduce the number of moves from 12 to 6, he saves $30.
The solution that most people presented with the balls and bins problem develop does things a bit differently. Rather than starting by moving crate C4 to its rightful place, the algorithm starts by emptying the first bin (i.e. moving C4 to DT), and then filling the empty location with the item that belongs there. Then, it fills the new empty slot with the element that belongs there, etc. Let’s see how it works, again starting with [4,1,5,3,2,x]:
Move C4 from D1 to DT, giving [x,1,5,3,2,4]
Move C1 from D2 to D1, giving [1,x,5,3,2,4]
Move C2 from D5 to D2, giving [1,2,5,3,x,4]
Move C5 from D3 to D5, giving [1,2,x,3,5,4]
Move C3 from D4 to D3, giving [1,2,3,x,5,4]
Move C4 from DT to D4, giving [1,2,3,4,5,x]
That took six moves, or half of what the “optimum” solution took. That’s not quite a fair comparison, though, because that’s not the worst case for this algorithm. The worst case occurs when moves result in a cycle that leaves DT empty before all of the items are in order. Consider the sequence [2,1,5,3,4,x]. Using our new algorithm:
Move C2 from D1 to DT, giving [x,1,5,3,4,2]
Move C1 from D2 to D1, giving [1,x,5,3,4,2]
Move C2 from DT to D2, giving [1,2,5,3,4,x]
At this point DT is empty but the crates aren’t all in place, so we start a new cycle:
Move C5 from D3 to DT, giving [1,2,x,3,4,5]
Move C3 from D4 to D3, giving [1,2,3,x,4,5]
Move C4 from D5 to D4, giving [1,2,3,4,x,5]
Move C5 from DT to D5, giving [1,2,3,4,5,x]
That’s seven moves rather than six.
It’s interesting to note that the swapping algorithm requires three swaps (nine moves) to rearrange items given this initial arrangement.
As far as I’ve been able to determine, the worst case for this algorithm requires 3N/2 moves. It will never make more moves than the swapping algorithm, and often makes fewer.
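The cycle-following algorithm can be sketched in Python; this is an illustrative sketch, not production code, and it returns the forklift moves so you can count them:

```python
def rearrange(crates):
    """Move crates so crate i ends up in compartment Di (1-based),
    using the single temporary slot DT. Returns the list of moves
    as (source, destination) pairs."""
    slots = list(crates)          # slots[i] holds the crate in D(i+1)
    moves = []
    for start in range(len(slots)):
        if slots[start] == start + 1:
            continue              # already in place
        # Start a cycle: empty this compartment into DT...
        temp, slots[start] = slots[start], None
        moves.append((f"D{start + 1}", "DT"))
        hole = start
        # ...then repeatedly fill the hole with the crate that belongs there.
        while temp != hole + 1:
            src = slots.index(hole + 1)
            slots[hole], slots[src] = slots[src], None
            moves.append((f"D{src + 1}", f"D{hole + 1}"))
            hole = src
        # The cycle closes: the crate parked in DT belongs in the hole.
        slots[hole] = temp
        moves.append(("DT", f"D{hole + 1}"))
    return moves

assert len(rearrange([4, 1, 5, 3, 2])) == 6   # the single-cycle example
assert len(rearrange([2, 1, 5, 3, 4])) == 7   # the two-cycle example
assert rearrange([1, 2, 3]) == []             # already sorted: zero moves
```

Each cycle of length k costs k + 1 moves, which is where the 3N/2 worst case (all cycles of length 2) comes from.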
The point of posing the problem to the candidate is to get him to develop an approach to the problem before he starts coding.
Another interesting thing about this question is that it allows for a follow-on question. Assuming that the candidate develops the optimum algorithm, ask him to write code that, given the starting position, outputs the moves required to rearrange the crates. In other words, implement the algorithm in code.
At this point, he’ll have to make a decision: use sequential search to locate specific crates, or create an index of some kind to hold the original layout so he can quickly look up an individual crate’s location. Again, he should be asking questions because the requirements are deliberately ambiguous. In particular, he might think that he can’t use the index because we only allowed him one extra swapping space for the crates. But again, the original problem statement doesn’t mention a computer program, and the follow-on question doesn’t place any restrictions on the memory consumption.
I’ll leave implementation of the algorithm as an exercise for those who are interested. It’s not difficult, just a bit odd for those who are used to writing traditional sorting algorithms.
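For those who want a head start on that exercise, here is one possible rendering in Python. It is my own sketch, not the author's code: crates are numbered so that crate i belongs in slot i (0-based for simplicity), and the spare slot is represented by the string 'DT'.

```python
def rearrange(crates):
    """Return the list of moves that puts `crates` into sorted order
    using a single spare slot ('DT'), following the cycle-chasing
    algorithm: empty an out-of-place bin into DT, then repeatedly
    fill the hole with the crate that belongs there."""
    crates = list(crates)
    moves = []
    for start in range(len(crates)):
        if crates[start] == start:
            continue  # crate already in its rightful bin
        # Empty this bin into the spare slot.
        dt = crates[start]
        crates[start] = None
        moves.append((start, 'DT'))
        hole = start
        # Chase the cycle until the crate sitting in DT belongs in the hole.
        while dt != hole:
            src = crates.index(hole)  # where the crate that belongs in `hole` sits
            crates[hole] = hole
            crates[src] = None
            moves.append((src, hole))
            hole = src
        # Finally, move the crate from DT into the last hole.
        crates[hole] = dt
        moves.append(('DT', hole))
    return moves
```

For [4,1,5,3,2,x] (0-based: `[3, 0, 4, 2, 1]`) this produces the six moves described above; an already-sorted arrangement produces no moves at all.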
Ultimately, the reason I find this problem a potentially useful interview question is because it forces the candidate to differentiate between the customer’s requirements (the optimum sequence of steps required to arrange his crates), and the internal implementation details. The extent to which the candidate can make that differentiation tells me a lot about how she will approach the real-world problems that we’ll present to her on a daily basis. In addition, the simple problem is rife with opportunities to have other discussions about approaches to problems and common coding tradeoffs.
It occurred to me as I was writing this that I should construct a balls and bins prop that I can trot out if the candidate doesn’t trip to the algorithm relatively quickly. It would be interesting to see how quickly the candidate, if given the prop, would discover the optimum algorithm.
I’m kind of excited about getting an opportunity to try this out.
]]>There is a method that takes an existing object and updates it based on fields in another object. The code writes a change log entry for each field updated. The code is similar to this:
private bool UpdatePerson(Person person, Models.Person personUpdate)
{
    var changes = new List<ChangeLogEntry>();
    changes.Add(CompareValues(person.FirstName, personUpdate.FirstName, "First Name"));
    changes.Add(CompareValues(person.LastName, personUpdate.LastName, "Last Name"));
    // ... other fields here
    changes = changes.Where(x => x != null).ToList(); // remove the null entries
    // ... update the record and write the change log
}
This is Just Wrong. It works, but it’s wonky. The code is adding null values to a list, knowing that later they’ll have to be removed. Wouldn’t it make more sense not to insert the null values in the first place?
One way to fix the code would be to write a bunch of conditional statements:
ChangeLogEntry temp;
temp = CompareValues(person.FirstName, personUpdate.FirstName, "First Name");
if (temp != null) changes.Add(temp);
temp = CompareValues(person.LastName, personUpdate.LastName, "Last Name");
if (temp != null) changes.Add(temp);
// etc., etc.
That’s kind of ugly, but at least it doesn’t write null values to the list.
The real problem is the CompareValues function:
private ChangeLogEntry CompareValues(string oldValue, string newValue, string columnName)
{
    if (oldValue != newValue)
    {
        var change = new ChangeLogEntry(columnName, oldValue, newValue);
        return change;
    }
    return null;
}
Yes, it’s poorly named. The method compares the values and, if the values are different, returns a ChangeLogEntry instance. If the values are identical, it returns null. Perhaps that’s the worst thing of all: returning null to indicate that the items are the same. This model of returning null to indicate status is a really bad idea. It forces clients to do wonky things. Of all the horrible practices enshrined in 40+ years of C-style coding, this one is probably the worst. I’ve seen and had to fix (and, yes, written) too much terrible old-style C code to ever advocate this design pattern. null is evil. Avoid it when you can.
And you usually can. In this case it’s easy enough. First, let’s eliminate that CompareValues function:
if (person.FirstName != personUpdate.FirstName) changes.Add(new ChangeLogEntry("First Name", person.FirstName, personUpdate.FirstName));
if (person.LastName != personUpdate.LastName) changes.Add(new ChangeLogEntry("Last Name", person.LastName, personUpdate.LastName));
That’s still kind of ugly, but it eliminates the null values altogether. And we can create an anonymous method that does the comparison and update, thereby removing duplicated code. The result is:
private void UpdatePerson(Person person, Models.Person personUpdate)
{
    var changes = new List<ChangeLogEntry>();

    // Anonymous method logs changes.
    var logIfChanged = new Action<string, string, string>((oldValue, newValue, columnName) =>
    {
        if (oldValue != newValue)
        {
            changes.Add(new ChangeLogEntry(columnName, oldValue, newValue));
        }
    });

    logIfChanged(person.FirstName, personUpdate.FirstName, "First Name");
    logIfChanged(person.LastName, personUpdate.LastName, "Last Name");
    // ... other fields here

    if (changes.Any())
    {
        // update record and write change log
    }
}
That looks a lot cleaner to me because there are no null values anywhere, and no need to clean trash out of the list.
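The closure-based fix translates directly to other languages. Here is a hypothetical Python rendering of the same pattern (my own sketch: dicts and tuples stand in for the Person and ChangeLogEntry types, and the field names are invented):

```python
from typing import Dict, List, Tuple

# (column name, old value, new value) -- stand-in for the ChangeLogEntry class
ChangeLog = List[Tuple[str, str, str]]

def update_person(person: Dict[str, str], update: Dict[str, str]) -> ChangeLog:
    """Collect change-log entries without ever creating a null entry."""
    changes: ChangeLog = []

    # The nested function plays the role of the C# Action delegate:
    # it closes over `changes` and appends only when a value changed.
    def log_if_changed(old: str, new: str, column: str) -> None:
        if old != new:
            changes.append((column, old, new))

    log_if_changed(person["first_name"], update["first_name"], "First Name")
    log_if_changed(person["last_name"], update["last_name"], "Last Name")
    # ... other fields here
    return changes
```

The list holds only real entries, so there is never a filtering pass.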
Here’s the kicker: I didn’t rewrite the code. If the code were as simple as what I presented here, I probably would have changed it. But the code is a bit more complex and it’s currently running in a production environment. I hesitate to change working code, even when its ugliness causes me great mental pain. I’m almost certain that I can make the required changes without negatively affecting the system, but I’ve been burned by that “almost” too many times with this code base. If I had a unit test for this code, I’d change it and depend on the test to tell me if I screwed up. But there’s no unit test for this code because … well, it doesn’t matter why. Let’s just say that this project I inherited has a number of “issues.” The one I’ll relate next time will boggle your mind.
]]>As I mentioned in the discussion of the binary heap, inserting an item takes time proportional to the base-2 logarithm of the heap size. If there are 100 items in the heap, inserting a new item will take, worst case, about log2(100), or 7 swaps. It could be fewer, but it won’t be more. Similarly, removing the minimum item in a min-heap requires O(log n) operations. In a binary heap, insertion and removal are both O(log n).
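As a quick illustration of that bound (my own sketch, not from the original post), here is the sift-up operation behind a binary-heap insert, counting the swaps for a worst-case insertion into a 100-item array-backed min-heap:

```python
import math

def heap_insert(heap, item):
    """Append item to a 0-based array min-heap and sift it up,
    returning the number of swaps performed."""
    heap.append(item)
    i = len(heap) - 1
    swaps = 0
    while i > 0:
        parent = (i - 1) // 2
        if heap[parent] <= heap[i]:
            break  # heap property restored
        heap[i], heap[parent] = heap[parent], heap[i]
        i = parent
        swaps += 1
    return swaps

# Worst case: insert an item smaller than everything already in the heap.
heap = list(range(1, 101))   # a sorted array is a valid min-heap
swaps = heap_insert(heap, 0)
# The swap count is bounded by the depth of the new leaf: floor(log2(n)).
assert swaps <= math.floor(math.log2(len(heap)))
```

The new item sifts from the last leaf all the way to the root, and the number of swaps is exactly the depth of that leaf, never more.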
As you saw in my introduction to the Pairing heap, inserting an item is an O(1) operation. It never involves more than a comparison and adding an item to a list. Regardless of how many items are already in the heap, adding a new item will take the same amount of time.
Removal, though, is a different story altogether. Removing the smallest item from a Pairing heap can take a very long time, or almost no time at all.
Consider the Pairing heap from the previous example, after we’ve inserted all 10 items:
0
|
6,4,3,5,8,9,7,2,1
It should be obvious that removing the smallest item (0) and readjusting the heap will take O(n) operations. After all, we have to look at every item during the pair-and-combine pass, before ending up with:
1
|
2, 8, 3, 4
|  |  |  |
7  9  5  6
But we only look at n/2 items the next time we remove the smallest item. That is, we only have to look at 2, 8, 3, and 4 during the pair-and-combine pass. The number of items we look at during successive removals is cut in half with each removal, until we get to two or three items per removal. Things fluctuate a bit, and it’s interesting to write code that displays the heap’s structure after every removal.
By analyzing the mix of operations required in a very large number of different scenarios, researchers have determined that the amortized complexity of removal in a Pairing heap is O(log n). So, asymptotically, removal from a Pairing heap is the same complexity as removal from a binary heap.
It’s rather difficult to get a thorough analysis of the Pairing heap’s performance characteristics, and such a thing is way beyond the scope of this article. If you’re interested, I suggest you start with Towards a Final Analysis of Pairing Heaps.
All other things being equal, Pairing heap should be faster than binary heap, simply because Pairing heap is O(1) on insert, and binary heap is O(log n) on insert. Both are O(log n) on removal. Keep in mind, though, that an asymptotically faster algorithm isn’t always faster in the real world. I made that point some time back in my post, When theory meets practice. On average, Pairing heap does fewer operations than does binary heap when inserting an item, but Pairing heap’s individual operations are somewhat more complex than those performed by binary heap.
You also have to take into account that Pairing heap can do some things that binary heap can’t do, or can’t do well. For example, the merge (or meld) function in Pairing heap is an O(1) operation. It’s simply a matter of inserting the root node of one heap into another. For example, consider these two pairing heaps:
1     0
|     |
2, 8  5
      |
      4
To merge the two heaps, we treat the root nodes just as we would two siblings after removing the smallest item. We pair them to create:
0
|
1,    5
|     |
2, 8  4
With binary heap, we’d have to allocate a new array that’s the size of both heaps combined, copy all items from both heaps to it, and then call the Heapify method to rebuild the heap. That takes time proportional to the combined size of both heaps.
And, as I mentioned previously, although changing the priority of an item in a binary heap isn’t expensive (it’s an O(log n) operation), finding the node to change is an O(n) operation. With Pairing heap, it’s possible to maintain a node reference, and it’s been shown (see the article linked above) that changing the priority of a node in a Pairing heap is O(log log n).
Another difference between Pairing heap and binary heap is the way they consume memory. A binary heap is typically implemented in a single contiguous array. So if you want a heap with two billion integers in it, you have to allocate a single array that’s 8 gigabytes in size. In a Pairing heap, each node is an individual allocation, and each node contains two pointers (object references). A Pairing heap of n nodes will occupy more total memory than a binary heap of the same size, but the memory doesn’t have to be contiguous. As a result, you might be able to create a larger pairing heap than you can a binary heap. In .NET, for example, you can’t create an array that has more than 2^31 (two billion and change) entries. A Pairing heap’s size, however, is limited only by the available memory.
]]>I like to call it creative laziness.
Binary heaps are useful and quite efficient for many applications. For a pure priority queue, it’s hard to beat the combination of simplicity and efficiency of a binary heap. It’s easy to understand, easy to implement, and performs quite well in most situations. But the simplicity of the binary heap makes it somewhat inflexible, and also can prove to be inefficient when heaps become very large.
Even though the binary heap isn’t sorted, its ordering rules are pretty strict. If you recall, a valid binary heap must satisfy two properties:

- The heap property: in a min-heap, every node is less than or equal to its children.
- The shape property: every level of the tree is completely filled, except possibly the last, which is filled from left to right.
Maintaining the shape property is required if you want to efficiently implement the binary heap in a list or array, because it lets you implicitly determine the locations of the child or parent nodes. But nothing says that you have to implement a binary heap in an array. You could use a more traditional tree representation, with explicit nodes that have right, left, and parent node references. That has some real benefits.
For example, a common extension to the priority queue is the ability to change the priority of an item in the queue. It’s possible to do that with a binary heap, and in fact isn’t difficult. It’s inefficient, though, because in a binary heap it takes a sequential search to find the item before you can change its priority.
But if you have a traditional tree representation, then you can maintain direct references to the nodes in the heap. At that point, there’s no longer a need for the sequential search to find the node, and changing an item’s priority becomes a pure O(log n) operation. Algorithms such as the A* search algorithm typically use something other than a binary heap for their priority queues, specifically because changing an item’s priority is a common operation.
Another common heap operation is merging (also known as melding): combining two heaps into one. If you have two binary heaps, merging them takes time proportional to the combined size of both heaps. That is, it takes O(n + m) time. The basic idea is to allocate an array that’s the size of both heaps, put all the items into it, and then call a BuildHeap function to construct the heap. With a different heap representation, you can do the merge in constant time.
It turns out that maintaining the shape property when you go to a more traditional tree representation is needlessly expensive. If you eliminate the need for the shape property, it becomes possible to indulge in even more creative laziness.
And, my oh my, have people been creative. Here’s a partial list of heap implementations.
There’s a lot of discussion about which of those is the “fastest” or “best” heap implementation. A lot of it involves theoretically optimum asymptotic behavior, and can get pretty confusing for those of us who just want a working heap data structure that performs well in our applications. Without going into too much detail, I’ll just say that some of those heap implementations that have theoretically optimum performance in some areas (the Brodal queue, for example) are difficult to implement and not at all efficient in realworld situations.
Of those that I’ve worked with, I particularly like Pairing heap because of its simplicity.
The pairing heap’s shape is much different than that of a binary heap. Rather than having right and left child references, a pairing heap node maintains child lists. It maintains the heap property in that any child in a minheap is greater than or equal to its parent, but the concept of shape is thrown out the window. For example, consider this binary heap of the numbers from 0 to 9.
0
1 3
5 2 7 4
9 6 8
Again, in the binary heap, each node has up to two children, and the children are larger than the parents. Although there are multiple valid arrangements of values possible for a binary heap, they all maintain that shape property.
A Pairing heap’s structure is much more flexible. The smallest node is always the root, and children are always larger than their parents, but that’s where the similarity ends. The best way to describe the structure is to go through an exercise of adding and removing items.
If you start with an empty heap and add an item, then that item becomes the root. So, let’s add 6 as the first item in the heap:
6
Now, we add the value 3. The rules for adding are:

- If the new item is smaller than the root, the new item becomes the new root, and the old root is appended to the child list.
- Otherwise, the new item is appended to the root’s child list.
In this case, 3 is smaller than 6, so 3 becomes the root and 6 is added as a child of 3.
3
|
6
Here, the vertical bar character (|) is used to denote a parent-child relationship. So node 6 is a child of node 3.
Now we add 4 to the heap. Since 4 is larger than 3, 4 is added to the child list.
3
|
6,4
When we add 2 to the heap, 2 becomes the new root, and 3 is appended to the child list:
2
|
6,4,3
I’ll add 5, 8, 9, and 7 in that order, giving:
2
|
6,4,3,5,8,9,7
Adding 0 makes 0 the root, and then adding 1 appends to the child list:
0
|
6,4,3,5,8,9,7,2,1
Note that the heap property is maintained. The smallest item is at the root, and child items and their siblings are larger than their parents. Because the smallest item is at the root, we can still execute the get-minimum function in constant time.
And in fact, insert is also a constant time operation. Inserting an item consists of a single comparison, and appending an item to a list.
Things get interesting when we want to remove the smallest item. Removing the smallest item is no trouble at all, of course. It’s figuring out the new root that gets interesting. This proceeds in two steps:
First, process the children in pairs from left to right, creating 2-node heaps with the lowest item at the root, and the other item as a child. So using the child list above, we would create:
4  3  8  2  1
|  |  |  |
6  5  9  7
So 6 is a child of 4, 5 is a child of 3, etc. In this case, 1 doesn’t have a child.
Then, we process from right to left, taking the lower item as the root and adding the other item as a child of the root. Starting with 1 and 2, we combine them to create:
4  3  8  1
|  |  |  |
6  5  9  2
         |
         7
Proceeding to compare 1 and 8:
4  3  1
|  |  |
6  5  2, 8
      |  |
      7  9
Note here that 2 and 8 are siblings, but 7 is a child of 2, and 9 is a child of 8. 9 and 7 are not siblings.
When we’re done processing, we have:
1
|
2, 8, 3, 4
|  |  |  |
7  9  5  6
Again, the heap property is maintained.
It’s very important to understand that the heap property is maintained. All child nodes are larger than their parent nodes. Siblings (children of the same node) are represented in an unordered list, which is just fine as long as each sibling is larger than the parent node. Also, a node’s children only have to be larger than their parent node. Being smaller than a parent’s sibling does not violate the heap property. In the figure above, for example, 5 is larger than its parent, 3, but smaller than its parent’s sibling, 8.
Removing 1 from the heap leaves us with
2, 8, 3, 4
|  |  |  |
7  9  5  6
Again, we pair from left to right. 2 is smaller than 8, so 2 becomes the root of the subtree. But what happens to the children? They’re appended to the child list of the other node. So pairing 2 and 8 results in:
2
|
8, 7
|
9
And pairing the others gives:
2     3
|     |
8, 7  4, 5
|     |
9     6
And combining from right to left makes 2 the root of the new tree:
2
|
8, 7, 3
|     |
9     4, 5
      |
      6
Again, you can see that the heap property is maintained.
Now, when we remove 2 from the heap, we first pair 8 and 7 to create:
7  3
|  |
8  4, 5
|  |
9  6
And then combine to form:
3
|
4, 5, 7
|     |
6     8
      |
      9
When 3 was selected as the smallest item, node 7 was appended to the child list.
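The whole structure can be sketched in a few dozen lines. This is my own Python rendering, not the author's code, and it uses the standard meld-based insert (which nests the old root under the new one rather than flattening the child list as the walkthrough above does); the heap property and the complexity bounds are the same either way.

```python
class PairNode:
    """A pairing-heap node: a value plus an unordered list of child subtrees."""
    __slots__ = ("value", "children")

    def __init__(self, value):
        self.value = value
        self.children = []

def meld(a, b):
    """Merge two heaps in O(1): the smaller root wins, the loser joins
    its child list. Either argument may be None (the empty heap)."""
    if a is None:
        return b
    if b is None:
        return a
    if b.value < a.value:
        a, b = b, a
    a.children.append(b)
    return a

def insert(root, value):
    """O(1) insert: meld a one-node heap with the existing heap."""
    return meld(root, PairNode(value))

def delete_min(root):
    """Remove the root, then rebuild in two passes: pair the children
    left to right, then combine the pairs right to left."""
    if root is None:
        raise IndexError("delete_min from an empty heap")
    kids = root.children
    paired = [meld(kids[i], kids[i + 1] if i + 1 < len(kids) else None)
              for i in range(0, len(kids), 2)]
    new_root = None
    for tree in reversed(paired):
        new_root = meld(new_root, tree)
    return root.value, new_root

# Build the heap using the article's insertion order, then drain it.
heap = None
for v in [6, 3, 4, 2, 5, 8, 9, 7, 0, 1]:
    heap = insert(heap, v)

drained = []
while heap is not None:
    smallest, heap = delete_min(heap)
    drained.append(smallest)
# drained comes out in sorted order, 0 through 9
```

Note that insert and meld never touch more than one node's child list, which is exactly why both are constant-time operations.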
That’s the basics of the Pairing heap. You might be asking yourself how that can possibly be faster than a binary heap. That will be the topic for next time.
]]>