Over the past few days, I’ve been playing with a script I built that attempts to build electoral vote maps using a genetic algorithm. Genetic algorithms are a computational method for efficiently finding solutions to complex problems. They work by measuring how well each member of a population of candidate solutions performs against a quantified goal. The better performers are combined to create a new “generation,” causing the population to “evolve” toward the goal. The process is repeated for a set number of generations or until there is no longer significant improvement from one generation to the next.
Being a political hack, I wanted to apply this technique to the ongoing presidential election in some way. I settled on building a script that works toward finding which states Hillary Clinton needs to win in order to end up with a given number of electoral votes. The script, however, can’t directly change whether Clinton wins or loses a state; it can only change the result in a state by changing the popular vote. There isn’t any predictive power in this; it is simply a thought experiment about how states can be won or lost to achieve a specific total.
Because a candidate needs to win more than 50% of the popular vote to take a state’s electoral votes, not every change to the popular vote (good or bad) has an immediate effect on the electoral vote total. I had previously built successful models that manipulated electoral votes directly; this time I wanted to see how a simple genetic algorithm would deal with delayed cause and effect.
In the end, I did get it to work. Because the method I used doesn’t have any “memory”, it was unable to consider whether a change that had no immediate impact was good or bad. Instead, the strongest maps tended to be those that kept vote totals close enough that even a small change to a candidate’s vote share would result in a change to the electoral college totals.
From what I can tell, these results don’t have much practical use. While elections do build on one another, they don’t “evolve” in the way genetic algorithms assume. Additionally, if I really wanted a map that gave Hillary Clinton some number of electoral votes, it would have been faster to just mess around with an interactive map on a site like 270toWin.com. Still, I got some hands-on experience with genetic algorithms and data visualization, so it was time well spent. Hopefully, some of you find it interesting.
All of the raw data used came from the U.S. Census Bureau. The primary dataset used in the script is projected voter turnout for the 2016 presidential election. I generated this dataset by averaging each state’s turnout in the 2008 and 2012 presidential elections, and then using that average to forecast 2016’s turnout.
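The projection step is just an average; here is a minimal sketch in Python (the state abbreviations and turnout figures below are placeholders for illustration, not the actual Census Bureau values):

```python
# Project 2016 turnout as the mean of 2008 and 2012 presidential turnout.
# Figures are placeholders, not the real Census data.
turnout = {
    # state: (2008 turnout, 2012 turnout)
    "OH": (5_700_000, 5_580_000),
    "FL": (8_390_000, 8_474_000),
}

projected_2016 = {
    state: (t2008 + t2012) / 2
    for state, (t2008, t2012) in turnout.items()
}
```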
I’m not a computational scientist, so my approach to the actual algorithm was pretty basic. The goal is for the algorithm to converge on a map, or series of maps, in which Clinton wins x electoral votes, where x is supplied by the user.
The electoral map I used has 56 regions, each worth at least 1 electoral vote: the 50 states, Washington D.C., and the five congressional districts of Nebraska and Maine. Those two states allocate electoral votes by congressional district rather than the “winner take all” method used by the other states.
The script generates an initial population of 100 electoral maps in which Clinton is assigned a random vote share in every region. If Clinton is assigned more than 50% of the vote in a state or district, she wins its electoral votes. A map’s “fitness” is the difference between the user’s target and Clinton’s total electoral votes; a map is more “fit” the closer that difference is to 0.
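A minimal sketch of this setup in Python (the function names and the uniform-random initialization are my assumptions, not the script’s actual code):

```python
import random

REGIONS = 56       # 50 states + D.C. + 5 congressional districts
POP_SIZE = 100

def random_map():
    # Clinton's vote share in each region, as a percentage 0-100.
    return [random.uniform(0, 100) for _ in range(REGIONS)]

def electoral_votes(vote_map, ev_per_region):
    # A region's electoral votes go to Clinton if she clears 50%.
    return sum(ev for share, ev in zip(vote_map, ev_per_region) if share > 50)

def fitness(vote_map, ev_per_region, target):
    # Lower is better: distance between Clinton's total and the target.
    return abs(target - electoral_votes(vote_map, ev_per_region))

population = [random_map() for _ in range(POP_SIZE)]
```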
To create the next generation of maps, the script pulls three random maps from the population, and the two fittest of the three are combined by averaging Clinton’s vote share in each state. As the states are combined, there is a 10% chance of a “mutation,” in which Clinton’s final vote share in a given state is randomly shifted up or down by up to 2.5%. This means that out of 5,600 merges (56 regions × 100 maps), we should expect ~560 mutations of varying degrees; some will be consequential, others won’t be. Maps are selected and merged until a new generation of 100 maps has been created. Both the 10% rate and the 2.5% cap were picked arbitrarily.
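The selection, crossover, and mutation steps might look like this (clamping shares to the 0–100% range is my addition; the original script’s behavior there isn’t specified):

```python
import random

MUTATION_RATE = 0.10   # chance a region mutates during a merge
MUTATION_SIZE = 2.5    # maximum shift, in percentage points

def breed(parent_a, parent_b):
    # Crossover: average Clinton's share region by region, with a
    # chance of a small random shift (the "mutation").
    child = []
    for a, b in zip(parent_a, parent_b):
        share = (a + b) / 2
        if random.random() < MUTATION_RATE:
            share += random.uniform(-MUTATION_SIZE, MUTATION_SIZE)
        child.append(min(100.0, max(0.0, share)))
    return child

def next_generation(population, fitness_fn, size=100):
    # Tournament of three: sample three maps, breed the two fittest.
    new_pop = []
    while len(new_pop) < size:
        trio = random.sample(population, 3)
        trio.sort(key=fitness_fn)      # lower fitness = better
        new_pop.append(breed(trio[0], trio[1]))
    return new_pop
```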
This whole process is repeated until the script is stopped. Ideally the script would stop automatically when the model converges, but I haven’t implemented that yet. From my own observations, the model typically appears to converge between 100 and 500 generations.
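One possible automatic stopping rule, which as noted isn’t in the script, is to halt when the best fitness hasn’t improved for some number of generations. A sketch, with the step and scoring functions passed in (the patience-based criterion is just one of several reasonable choices):

```python
def run(population, step, score, max_generations=500, patience=50):
    # step:  population -> next population (selection/crossover/mutation)
    # score: map -> fitness, lower is better
    # Stop early if `patience` generations pass with no new best score.
    best = min(score(m) for m in population)
    last_improved = 0
    for gen in range(1, max_generations + 1):
        population = step(population)
        current = min(score(m) for m in population)
        if current < best:
            best, last_improved = current, gen
        elif gen - last_improved >= patience:
            break
    return population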
Considering that this is a quick and dirty genetic algorithm that ultimately works toward a useless objective, I was really happy with the results. The most obvious indicator of success is simply whether or not the model converged on the target number of electoral votes (if it converged at all). To test the model, I gave the algorithm the goal of finding a map in which Hillary Clinton and Donald Trump split the electoral college’s 538 votes 269/269. A candidate needs 270 electoral votes to win, so a 269–269 tie is a highly unlikely scenario in which there is no clear winner on Election Night. I ran the model for 500 generations.
Interestingly, in most test runs, at least one of the initial random maps was either within a few votes of the target or actually hit it. The presence of strong contenders in the first generation may be why the model converges so quickly. Simply having one correct map wasn’t good enough for me, though. I wanted the model to assert that a particular map was correct by eliminating almost every other variation. This introduces the idea of speciation.
A map’s “species” can be identified by how many electoral votes Clinton gets. A total number of votes can often be achieved several different ways, so maps of the same species might look different.
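Counting species is straightforward once maps are grouped by Clinton’s electoral vote total; a sketch (function names are mine):

```python
from collections import Counter

def species_counts(population, ev_per_region):
    # Group maps by Clinton's electoral vote total -- the "species".
    def total(vote_map):
        return sum(ev for share, ev in zip(vote_map, ev_per_region)
                   if share > 50)
    return Counter(total(m) for m in population)
```

The number of distinct keys gives the population’s diversity, and the largest count gives the dominant species’ prevalence.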
At the start of my 500 generation run, the population had over 70 species — or 70 different vote totals for Clinton. In contrast, the last ten generations had an average of 4 species. As long as the most prevalent species in that group of 4 had 269 electoral votes, the model would be considered successful. The chart above demonstrates how the diversity of the population dropped over the 500 generations. The chart below shows the growth and dominance of 269 vote maps over the same period.
A good indicator of convergence may be when one species has a high prevalence at the same point the total population has low diversity. If this were the case, the algorithm would have stopped somewhere around 100 to 200 generations on this run.
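That heuristic could be written as a simple check over the species counts (the 90% dominance and five-species thresholds here are illustrative guesses, not values from my runs):

```python
def has_converged(counts, min_dominance=0.9, max_species=5):
    # counts: mapping of species (EV total) -> number of maps.
    # Converged when one species dominates AND overall diversity is low.
    population = sum(counts.values())
    top = max(counts.values())
    return (len(counts) <= max_species
            and top / population >= min_dominance)
```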
To illustrate how the maps evolved, I randomly pulled a map from the dominant species during the first 60 generations. The evolution of the electoral vote map is shown here:
Because the early maps weren’t very fit, the population was very unstable; a new map dominated the population in each of the first 12 generations. The population then slowly became more stable until about generation 40. After that, the dominant map rarely changed for the remaining 460 generations. Referring back to the two earlier graphs, this was the same point at which the diversity of the population collapsed and “269” took over.
So, it worked! But there was one more piece — I still hadn’t figured out how the algorithm coped with having only indirect control over its goal. To take a look, I pulled maps of the popular vote, and the answer was pretty straightforward: the strongest maps ended up being those with small vote spreads in each state. These close state-level results allowed small mutations to have an immediate effect on the total electoral vote count.
As I covered earlier, the method I used to build the genetic algorithm doesn’t have a “memory”; that is, it has no mechanism for determining whether a given change will be beneficial in the long run. Maps that could immediately capitalize on small mutations in key states performed better; those that could immediately capitalize on small mutations in any state did best. Therefore, the strongest maps were those in which every state’s margin of victory was within a few percentage points.
In fact, the moderate mutation range of +/- 2.5% seems to be the key to this method working. For comparison, I expanded the possible mutation to +/- 15% and got much slower, messier results: it took more than 400 generations for the population’s diversity to collapse, and about 350 generations for the 269 vote maps to become the prevalent species. I also ran the script with no mutation range (0%). As expected, diversity collapsed almost immediately, and species 269 had dominated the population by generation 15.
There are pitfalls at both of these extremes. On the high end, the model may never fully converge because there is too much noise. On the low end, the model may move too quickly and converge on an answer that is “good enough” but not correct. A little noise seems necessary to introduce new, possibly better, maps into the population. In this case, a 2.5% mutation range seemed to be a happy medium. If I ever do think of a practical use for this, I’ll have plenty of material to consider.
Graphs demonstrating these two “extreme” scenarios are included below.