Gentleman Johnny

On June 13th, 1777, “Gentleman Johnny” Burgoyne and Major General Guy Carleton inspected the forces of Great Britain assembled at Saint-Jean-sur-Richelieu, about to embark upon an invasion of the American colonies from Canada. The force consisted of approximately 7,000 soldiers and 130 artillery pieces and was to travel southward through New York, by water and through the wilderness, to meet up with a second force moving north from New York City. Capturing the Hudson River Valley and, in particular, the city of Albany would divide New England from the mid-Atlantic colonies, facilitating the defeat of the rebel forces in detail.

Poor communication may have doomed the plan from the start. The army which Burgoyne counted on moving up from New York City, under the command of General Howe, was committed to an attack on Philadelphia, to be executed via a southern approach. Thus, when it needed to be available to move north, Howe’s army would be separated from the upstate New York theater not only by distance, but also by George Washington. Burgoyne did not receive this important information and set out on his expedition unaware of this flaw.

Nonetheless, Burgoyne began well enough. As he moved southward, the colonial forces were unaware of his intent and strength and friendly native forces screened his army from observation. He captured Crown Point without opposition and successfully besieged and occupied Fort Ticonderoga. Following these successes, he embarked on an overland route from Skenesboro (at the southern reaches of Lake Champlain) to Fort Edward, where the Hudson River makes its turn south. This decision seemed to have been taken so as to avoid moving back northward, a retrograde movement necessary to use Lake George’s waterway. It may well also indicate Burgoyne’s lack of appreciation for the upstate New York terrain and its potential to allow a smaller colonial force to impede his movements.

Live Free or Die

In order to deal with the enemy blocking his path, Burgoyne sent forth his allied Indian forces to engage and run off the colonials. Having done so, they proceeded to loot and pillage the scattering of colonial settlements in the area. This had the perverse effect of driving otherwise-neutral locals into the rebel camp. As the fighting portion of his army made the trek to Fort Edward rather rapidly and uneventfully, Burgoyne discovered he had two serious issues. First, he finally received communication from Howe informing him that the bulk of the New York army, the forces with whom Burgoyne was planning to rendezvous, were on their way by sea to south of Philadelphia. Second, the movement through the wilderness had delayed his supply train, unsuited as it was to movement through primal woodland.

Burgoyne’s solution was to pause once again and attempt to “live off the land” – requisitioning supplies and draft animals from the nearby settlers. Burgoyne also identified a supply depot at Bennington (Vermont) and directed a detachment to seize its bounty. What he didn’t know was that the settlers of Vermont had already successfully appealed to the government of New Hampshire for relief. New Hampshire’s General John Stark had, in less than a week’s time, assembled roughly 10% of New Hampshire’s fighting-age population to field a militia force of approximately 1,500.

When Burgoyne’s detachment arrived at Bennington, they found waiting for them a rebel militia more than twice their number. After some weather-induced delay, Stark’s force executed an envelopment of the British position, capturing many and killing the opposing commander. Meanwhile, reinforcements were arriving for both sides. The royal force arrived first and set upon the disarrayed colonial forces, who were busy taking prisoners and gathering up supplies. As Stark’s forces neared collapse, the Green Mountain Boys, under the command of Seth Warner, arrived and shored up the lines. The bloody engagement continued until nightfall, after which the royalists fell back to their main force, abandoning all their artillery.

Stark’s dramatic victory had several effects. First, it provided a shot in the arm for American morale, once again showing that the American militia forces were capable of standing up to the regular armies of Europe (Germans, in this case). Second, it had the opposite, deleterious impact on the morale of Burgoyne’s Indian allies, causing the large native force that he had used for screening purposes to abandon him. Third, it created a folklore that persists in northern New England to this day. Stark became a hero, with his various pre- and post-battle utterances preserved for the ages. Not the least of these was from a letter penned well after independence. Stark regretted that his ill health would prevent him from attending a Battle of Bennington veterans’ gathering. He closed his apology with the phrase, “Live free or die: Death is not the worst of evils,” which has been adopted as the official motto of the State of New Hampshire.

Saratoga

The delays put in the path of Burgoyne’s march gave the Colonials time to organize an opposition. New England’s rebellion found itself in a complex political environment, pitting the shock at the loss of Ticonderoga against the unexpected victory at Bennington. The result was a call-to-arms of the colonial militias, which were assembled into a force of some 6,000 in the vicinity of Saratoga, New York. General Horatio Gates was dispatched by the Continental Congress to take charge of this force, which he did as of August 19th. His personality clashed with some of the other colonial generals including, perhaps most significantly, Philip Schuyler. Among the politicians in Philadelphia, Schuyler had taken much of the blame for the loss of Ticonderoga. Some even whispered accusations of treason. Schuyler’s departure deprived Gates of Schuyler’s knowledge of the area he was preparing to defend, hindering his efforts. Burgoyne, for his part, focused his will to the south and was determined to capture Albany before winter set in. Going all-in, he abandoned the defense of his supply lines leading back northward and advanced his army towards Albany.

On September 7th, Gates moved northwards to establish his defense. He occupied terrain known as Bemis Heights, which commanded the road southward to Albany, and began fortifying the position. By September 18th, skirmishers from Burgoyne’s advancing army began to run up against those of the colonists.

Having scouted the rebel lines, Burgoyne’s plan was to turn the rebel left. That left wing was under the command of one of Washington’s stars, General Benedict Arnold. Arnold and Gates were ill-suited to each other, leaving Arnold to seek allies from among Schuyler’s old command structure, thus provoking even further conflict. Arnold’s keen eye and aggressive personality saw the weakness of the American left, and he realized how Burgoyne might exploit it. He proposed to Gates that they advance from their position on the heights and meet Burgoyne in the thickly-wooded terrain, a move that would give an advantage to the militia. Gates, on the other hand, felt his best option was to fight from the entrenchments that he had prepared. After much debate, Gates authorized Arnold to reconnoiter the forward position, where he encountered, and ultimately halted, the British advance in an engagement at Freeman’s Farm.

Game Time

In my previous article, I talked about some new stuff I’d stumbled across in the realm of AI for chess. The reason I was stumbling around in the first place was a new game in which I’ve taken a keen interest. That game is Freeman’s Farm, from Worthington Games, and I find myself enamored with it. Unfortunately it is currently out of print (although they do appear to be gearing up for a second edition run).

How do I love thee? Let me count the ways. There are three different angles from which I view this game. In writing here today, I want to briefly address all three. To me, the game is interesting as a historical simulation, as a pastime/hobby, and as an exercise in “game theory.” These factors don’t always work in tandem, but I feel like they’ve all come together here – which is why I find myself so obsessed (particularly with the third aspect).

The downside of this game, to pile on with some of the online reviews, is its documentation. Even after two separate rule clarifications put out by the developer, there remain ambiguities aplenty. The developer has explained that the manual was the product of the publisher, and it seems like Worthington values brevity in their rule sets. In this case, Worthington may have taken it a bit too far. To me, though, this isn’t a deal-breaker. The combination of posted clarifications, online discussion, and the possibility of making house rules leaves the game quite playable, and one hopes that much improvement will be found in the second edition. Still, it is easy to see how a customer would be frustrated with a rule book that leaves so many questions unanswered.

Historical War Gaming

This product is part of the niche category that is historical wargaming. Games of this ilk invite very different measures of evaluation than other (and sometimes similar) board games. I suppose it goes without saying that a top metric for a historical wargame is how well it reflects the history. Does it accurately simulate the battle or war that is being portrayed? Does it effectively reproduce the command decisions that the generals and presidents may have made during the war in question? Alternatively, or maybe even more importantly, does it provide insight to the player about the event that is being modeled?

On this subject, I am not well placed to grade Freeman’s Farm. What I will say is that the designer created this game as an attempt to address these issues of realism and historicity. In his design notes, he explains how the game came about. He was writing a piece on historical games for which he was focusing on the Saratoga Campaign. As research, he studied “all the published games” addressing the topic, and found them to be lacking something(s).

I’ll not bother to restate the designer’s own words (which can be accessed directly via the publisher’s website). What is worth noting is that he has used a number of non-conventional mechanisms. The variable number of dice and the re-rolling option are not exactly unique, but they do tend to be associated with “wargame lite” designs or other “non-serious” games. Likewise, the heavy reliance on cards is a feature that does not cry out “simulation.” That said, I am not going to be too quick to judge. Probability curves are probability curves, and all the different ways to roll the dice have their own pros and their own cons. Freeman’s Farm‘s method allows players to easily understand which factors are important but makes it very difficult to calculate exactly what the optimal tactics are. Compare and contrast, for example, to the gamey moves required to get into the next higher odds column on a traditional combat results table.

Playing for Fun

All the above aside, a game still needs to be playable and fun to succeed. We seem to be living through a renaissance in the board gaming world, at least among the hobbyists that have the requisite depth of appreciation. A vast number of well-designed and sophisticated board games are produced every year covering a huge expanse of themes. More importantly, ideas about what makes a game “fun” and “playable” have evolved such that today’s games are almost universally better than the games of a generation or two ago. Gone are the days (such as when I was growing up) when slapping a hot franchise onto a roll-the-dice-and-move-your-piece game was considered effective and successful game design. You can still buy such a thing, I suppose, but you can also indulge yourself with dozens and dozens of games based on design concepts invented and refined over the last couple of decades.

From this standpoint, Freeman’s Farm also seems to have hit the mark. It is a nice-looking game using wooden blocks and a period-evoking, high-quality game board. I’ve read some complaints online about boxes having missing (or the wrong) pieces in them. This would definitely be a problem if you don’t notice it and try to play with, for example, the wrong mix of cards. The publisher seems to be responsive and quick to get people the material they need.

Game Theory

The real reason I’m writing about this now is because the game has a form that seems, at least to my eyes, to be a very interesting one from a theoretical standpoint. Contrast to, say, chess, and you’ll notice there are very few “spaces” on this game’s board. Furthermore, the movement of pieces between spaces is quite restricted. This all suggests to me that the model of the decision making in this game (e.g. a decision tree) would have a simplicity not typical for what we might call “serious” wargames.

Given the focus of that last post, I think this game would get some interesting results from the kind of modeling that AlphaZero used so successfully for chess. Of course, it is also wildly different from the games upon which that project has focused. The two most obvious deviations are that Freeman’s Farm uses extensive random elements (drawn cards, rolled dice) and that the game is neither symmetric nor strictly alternating. My theory, here, is that the complex elements of the game will still adhere to the behavior of that core, simple tree. To make an analogy to industrial control, the game might just behave like a process with extensive noise obscuring its much-simpler function. If true, this is well within the strengths of the AI elements of AlphaZero – namely the neural-net-enhanced tree search.
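To make that notion of a “core, simple tree” slightly more concrete, here is a minimal sketch of how such a game could be evaluated: average over the dice at chance nodes, maximize over the player’s choices. Everything here – the state fields, the actions, the payoffs – is an invented placeholder, not Freeman’s Farm’s actual rules.

```python
# Hypothetical sketch: a turn as a small decision tree with chance nodes
# (dice) layered on top. State fields, actions, and payoffs are placeholders.

def legal_actions(state):
    # Placeholder: in the real game this would enumerate activations,
    # momentum spends, and so on.
    return state.get("actions", [])

def chance_outcomes(state, action):
    # Placeholder: (probability, resulting_state) pairs for the dice roll
    # that resolves this action.
    return state["outcomes"][action]

def evaluate(state):
    # Placeholder terminal/heuristic value (say, a victory-point swing).
    return state.get("value", 0.0)

def expectimax(state, depth):
    """Average over the dice, maximize over the player's choices."""
    actions = legal_actions(state)
    if depth == 0 or not actions:
        return evaluate(state)
    best = float("-inf")
    for action in actions:
        expected = sum(p * expectimax(s, depth - 1)
                       for p, s in chance_outcomes(state, action))
        best = max(best, expected)
    return best

# Tiny demo: one decision, resolved by a die roll.
demo = {
    "actions": ["press the attack", "hold and rally"],
    "outcomes": {
        "press the attack": [(0.5, {"value": 3}), (0.5, {"value": -2})],
        "hold and rally":   [(1.0, {"value": 1})],
    },
}
print(expectimax(demo, depth=1))   # 1.0 -- holding edges out the risky attack (0.5)
```

If the game really does reduce to a tree of this general shape, then the cards and dice are just noise laid over it, which is exactly the situation tree-search methods handle well.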

Momentum and Morale

A key element of all three of these aspects is a rethinking of how to model command and control. It is novel enough that I’ll wrap up this piece by considering the mechanism in detail. In place of some tried-and-true method for representing command, this game uses blocks as a form of currency – blocks that players accumulate and then spend over the course of the game. Freeman’s Farm calls this momentum, a term that helps illustrate its use. From a battlefield modeling and simulation standpoint, though, I’m not sure the term quite captures all that it does. Rather, the blocks are a sort of catch-all for the elements that contribute to successful management of your army during the battle, looking at it from the perspective of a supreme commander. They are battlefield intelligence, they are focus and intent, and they are other phrases you’ve heard used to describe the art of command. Maybe the process of accumulating blocks represents getting inside your enemy’s OODA loop, and spending blocks is the exploitation of that advantage.

Most other elements in Freeman’s Farm only drain away as time goes by. For example, your units can lose strength and they can lose morale, but they can’t regain it (a special rule here and there aside). You have a finite number of activations, after which a unit is spent – done for the day, at least in an offensive capacity. Momentum, by contrast, builds as the game goes on – either to be saved up for a final push towards victory or dribbled out here and there to improve your game.

Now, I don’t want to go down a rabbit hole of trying to impose meaning where there probably isn’t any. What does it mean to decide that you don’t like where things are going and you’re going to sacrifice a bit of morale to achieve a better kill ratio? Although one can construct a narrative as to what that might mean (and maybe by doing so, you’ll enjoy the game more), that doesn’t mean it is all simulation. The point is, from a game standpoint, the designer has created a neat mechanism to engage the players in the process of rolling for combat results. It also allows a player to become increasingly invested in their game, even as it is taking away decisions they can make because their units have become engaged, weakened, and demoralized.

I’m going to want to come back and address this idea of modeling the game as a decision tree. How well can we apply decision trees and/or neural networks to the evaluation of game play? Is this, indeed, a simple yet practical application of these techniques? Or does the game’s apparent simplicity obscure a far more complex reality, one that prevents these machine learning techniques from being applied by someone who doesn’t have DeepMind/Google’s computing resources? Maybe I’ll be able to answer some of these questions for myself.

Chess Piece Face

A couple of months after DeepMind announced their achievement with the AlphaZero chess-playing algorithm, I wrote an entry giving my own thinking on the matter. To a large extent, I wrote of my own self-adjustment from the previous generation of neural network technology to the then cutting-edge techniques which were used to create (arguably) the world’s best chess player. Some of the concepts were new to me, and I was trying to understand them even as I explained them in my own words. Other things were left vague in the summary paper, leaving me to fill in the blanks with guesses about how the programming team might have gone about their work.

It took about another year, but the team eventually released a fully-detailed paper explaining the AlphaZero architecture, even as tech-savvy amateurs (albeit folks far more competent than I) dissected the available information. Notably, this included the creation of open-source reimplementations of AlphaZero‘s proprietary technology. Now, more than three years on, there are many basic-level explanations, using diagrams and examples, explaining to non-experts what AlphaZero has done and how (for just one example, with diagram, see here).

I made a couple of serious mistakes in my previous analysis. Most importantly, I misunderstood the significance of AlphaZero‘s “general-purpose Monte-Carlo tree search (MCTS) algorithm.” At the time I first read that, I interpreted it as simply the ability to randomly generate valid moves and thus to compile the list of possibilities which the neural network was to rank. An untrained neural net would rank its choices randomly and probably lose*, but could begin to learn how to win as better and better training sets were developed. That isn’t quite accurate. The better way to think about the AlphaZero solution is that it is fundamentally an MCTS algorithm (all caps), but one which uses a neural net as a short-cut to estimating parameters where they have yet to be calculated. I think I’ll want to come back to this later, but in the meantime an article using tic-tac-toe to explain MCTS might be educational.
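To make the MCTS idea concrete, here is a bare-bones UCT-style search for tic-tac-toe. This is my own illustrative sketch, not DeepMind’s algorithm: plain random rollouts do the estimating, with no neural network guiding the search.

```python
# Minimal Monte-Carlo tree search (UCT) for tic-tac-toe.
import math, random

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def moves(board):
    return [i for i, v in enumerate(board) if v is None]

class Node:
    def __init__(self, board, player, parent=None, move=None):
        self.board, self.player = board, player   # player = side to move here
        self.parent, self.move = parent, move
        self.children, self.visits, self.wins = [], 0, 0.0
        self.untried = moves(board)

def uct_select(node):
    return max(node.children, key=lambda c: c.wins / c.visits +
               math.sqrt(2 * math.log(node.visits) / c.visits))

def rollout(board, player):
    while winner(board) is None and moves(board):
        m = random.choice(moves(board))
        board = board[:m] + (player,) + board[m+1:]
        player = "O" if player == "X" else "X"
    return winner(board)

def mcts(root_board, root_player, iterations=2000):
    root = Node(root_board, root_player)
    for _ in range(iterations):
        node = root
        # 1. Selection: walk down fully-expanded nodes by UCT score.
        while not node.untried and node.children:
            node = uct_select(node)
        # 2. Expansion: add one untried move (unless the game is over).
        if node.untried and winner(node.board) is None:
            m = node.untried.pop()
            board = node.board[:m] + (node.player,) + node.board[m+1:]
            child = Node(board, "O" if node.player == "X" else "X", node, m)
            node.children.append(child)
            node = child
        # 3. Simulation: play the rest of the game at random.
        result = rollout(node.board, node.player)
        # 4. Backpropagation: credit the player who moved into each node.
        while node:
            node.visits += 1
            mover = "O" if node.player == "X" else "X"
            if result == mover:
                node.wins += 1
            elif result is None:
                node.wins += 0.5
            node = node.parent
    return max(root.children, key=lambda c: c.visits).move

print(mcts((None,) * 9, "X"))   # best opening square for X (usually the centre, 4)
```

AlphaZero’s twist, as I understand it, is to replace those random rollouts and raw visit statistics with a neural network’s estimates of move priors and position value, so the search spends its effort where the net already suspects the good moves are.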

My next error was in my understanding of convolutional networks. If you read my explanation, you might surmise that the source of my error was in my outdated understanding of neural network technology in general. I imagined the convolutional layer as being a set of small neural networks. Further reading (a well-illustrated example is here) about the use of convolution in image processing instructed me on the use of filters. This convolution strategy uses transformations that have been well defined through years of use in image manipulation and, in fact, often produce results familiar to those of us who play with photographs through the likes of Photoshop or GIMP. That a filter which helps sharpen images for the human eye would also help a machine intelligence with edge detection makes sense. In the broader sense, it is a simple method of encapsulating concepts like proximity in a way that does not require a neural net training process to learn about them on its own.
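As a concrete illustration of those fixed image filters (not, to be clear, the learned filters a network like AlphaZero trains for itself), here are a classic 3x3 sharpen kernel and an edge-detection kernel applied by ordinary 2-D convolution:

```python
# Hand-picked convolution kernels of the Photoshop/GIMP variety.
import numpy as np
from scipy.signal import convolve2d

sharpen = np.array([[ 0, -1,  0],
                    [-1,  5, -1],
                    [ 0, -1,  0]])

edges = np.array([[-1, -1, -1],
                  [-1,  8, -1],
                  [-1, -1, -1]])

# A toy 7x7 "image": a bright 3x3 square on a dark background.
image = np.zeros((7, 7))
image[2:5, 2:5] = 1.0

print(convolve2d(image, sharpen, mode="same"))  # boosts contrast at the square's border
print(convolve2d(image, edges, mode="same"))    # zero in the interior, non-zero along the edges
```

A convolutional layer works the same way mechanically; the difference is that the numbers in the kernels are learned during training rather than chosen by a person.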

I continue to believe that there is quite a bit of method, borne from extensive trial-and-error, in arranging the game state so that it makes sense, not only to the neural network, but also to those image processing filters. When the problem is identifying elements of an image, those real-life objects occupy distinct regions within the grid of pixels that make up that image. For the “concepts” about a chess board during a match, what is it that causes information to be co-located within a game-turn’s representation? It seems to me that the structure of the input arrays would have to be conceived while thinking that you are turning chess information into something that resembles a photograph. While the paper clearly states that the researchers DIDN’T find the results to depend upon data structure, it would seem this is a lot easier to get wrong than to get right.

I also note, assuming the illustration of AlphaZero‘s architecture in the above paper is correct, that the neural network is extraordinarily complex. At the same time, it seems arbitrarily selected in advance. The dimensions of the architecture are in nice round numbers – 256 convolution filters, 40 residual layers, etc. – which suggests to me one (or both) of two things. First, that these are probably borrowed architectural decisions drawn from long experience with image-processing neural networks. Second, that it may not have been necessary to “optimize” the architecture itself if the results were insensitive to the details. Of course, it’s also possible that when you’re drowning in excess computing power, you’d be more than happy to vastly oversize your neural network and then rely on the training process to trim the fat rather than invest the manpower to improve the architecture manually. Further, humans may make the right architectural decisions or the wrong ones. The network training will, at all times, attempt to make the objectively-best decision AND, perhaps more importantly, continually re-evaluate those decisions with each new run. For example, if you have enough computing power and training data, the fact that 252 out of those 256 filters are excluded every time may not be particularly concerning. Especially if, at some point, a new generation of the network decides that, after all, one of those excluded 252 really is important.
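For a sense of what those nice round numbers describe, here is a hedged sketch of a 256-filter convolutional residual block, PyTorch-style. The actual AlphaZero tower differs in its details; the point is only that this kind of block is a stock recipe from the image-processing world, which is consistent with the “borrowed architecture” guess above.

```python
# Illustrative residual block: 3x3 convolutions, batch norm, and a skip
# connection. The 256-channel width mirrors the round numbers in the paper.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = torch.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return torch.relu(out + x)              # the skip connection

# Stack blocks to get a "tower"; the input planes would encode the board.
tower = nn.Sequential(*[ResidualBlock(256) for _ in range(4)])
dummy = torch.zeros(1, 256, 8, 8)               # one 8x8 board, 256 channels
print(tower(dummy).shape)                       # torch.Size([1, 256, 8, 8])
```

Stacked forty deep, blocks like this add up to tens of millions of parameters, which is what makes the “oversize it and let training trim the fat” approach plausible only when computing power is effectively free.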

I update my previous entry on this today, not because I think I finally understand it all but, instead, because I don’t. The new bits of understanding that I’ve come by, or maybe just think I’ve come by, suggest new and interesting ways to solve problems other than chess. Hopefully, over the next few weeks, I’ll be able to follow through with this line of thought and continue to explore some more interesting, and far less speculative, thoughts on using machine learning for solutions that are less academic and more accessible to the non-Google-funded hobbyist.

*Except that the networks train against an equally incompetent opponent. Which, if you think about it, is a fundamental flaw with my approach. The network would have no way to bootstrap itself into the realm of the mildly-competent player.

Artificial, Yes, but Intelligent?

Keep your eyes on the road, your hands upon the wheel.

When I was in college, only one of my roommates had a car. The first time it snowed, he expounded upon the virtues of finding an empty and slippery parking lot and purposely putting your car into spins.  “The best thing about a snow storm,” he said. At the time I thought he was a little crazy. Later, when I had the chance to try it, I came to see it his way. Not only is it much fun to slip and slide (without the risk of actually hitting anything), but getting used to how the car feels when the back end slips away is the first step in learning how to fix it, should it happen when it actually matters.

Recently, I found myself in an empty, ice-covered parking lot and, remembering the primary virtue of a winter storm, I hit the gas and yanked on the wheel… but I didn’t slide. Instead, I encountered a bunch of beeping and flashing as the electronic stability control system on my newish vehicle kicked in. What a disappointment it was. It also got me thinkin’.

For a younger driver who will almost never encounter a loss-of-traction slip condition, how do they learn how to recover from a slide or a spin once it starts? Back in the dark ages, when I was learning to drive, most cars were rear-wheel-drive with a big, heavy engine in the front. It was impossible not to slide around a little when driving in a snow storm. Knowing all the tricks of slippery driving conditions was almost a prerequisite to going out into the weather. Downshifting (or using those number gears on your automatic transmission), engine braking, and counter-steering were all part of getting from A to B. As a result*, when an unexpectedly slippery road surprises me, I instinctively take my foot off the brakes/gas and counter-steer without having to consciously remember the actual lessons. So does a car that prevents sliding 95% of the time result in a net increase in safety, even though it probably makes that other 5% worse? It’s not immediately obvious that it does.

On the Road

I was reminded of the whole experience a month or so ago when I read about the second self-driving car fatality. Both crashes happened within a week or so of each other in Western states; the first in Arizona and the second in California. In the second crash, Tesla’s semi-autonomous driving function was in fact engaged at the time of the crash and the driver’s hands were not on the wheel six seconds prior. Additional details do not seem to be available from media reports, so the actual how and why must remain the subject of speculation. In the first, however, the media has engaged in the speculation for us. In Arizona, it was an Uber vehicle (a Volvo in this case) that was involved, and the fatality was not the driver. The media has also reported quite a lot that went wrong. The pedestrian who was struck and killed was jaywalking, which certainly is a major factor in her resulting death. Walking out in front of a car at night is never a safe thing to do, whether or not that car is self-driving. Secondly, video was released showing the driver was looking at something below the dashboard level immediately before the crash, and thus was not aware of the danger until the accident occurred. The self-driving system itself did not seem to take any evasive action.

Predictably, the Arizona state government responded by halting the Uber self-driving car program. More on that further down, but first, a look at the driver’s distraction.

After the video showing such was released, media attention focused on the distracted-driving angle of the crash. It also brought up the background of the driver, who had a number of violations behind him. Certainly the issue of electronics and technology detracting from safe driving is a hot topic and something, unlike self-driving Uber vehicles, that most of us encounter in our everyday lives. But I wonder if this exposes a fundamental flaw in the self-driving technology?

It’s not exactly analogous to my snow situation above, but I think the core question is the same. The current implementation of self-driving car technology augments the human driver rather than replaces him or her. In doing so, however, it also removes some of the responsibility from the driver, as well as making him more complacent about the dangers that he may be about to encounter. The more that the car does for the driver, the greater the risk that the driver will allow his attention to wander rather than stay focused, on the assumption that the autonomous system has him covered. In the longer term, are there aspects of driving that the driver will not only stop paying attention to, but lose the ability to manage in the way a driver of a non-automated car once did?

Naturally, all of this can be designed into the self-driving system itself. Even if a car is capable of, essentially, driving itself over a long stretch of a highway, it could be designed to engage the driver every so many seconds. Requiring essentially unnecessary input from the operator can be a way to make sure she is ready to actively control the car if needed. I note that we aren’t breaking new ground here. A modern aircraft can virtually fly itself, and yet some parts of the design (plus operational procedures) are surely in place to make sure that the pilots are ready when needed.

As I said, the governmental response has been to halt the program. In general, it will be the governmental response that will be the biggest hurdle for self-driving car technology.

In the specific case of Arizona, I’m not actually trying to second guess their decision. Presumably, they set up a legal framework for the testing of self-driving technology on the public roadways. If the accident in question exceeded any parameters of that legal framework, then the proper response would be to suspend the testing program. On the other hand, it may be that the testing framework had no contingencies built into it, in which case any injuries or fatalities would have to be evaluated as they happen. If so, a reactionary legal response may not be productive.

I think, going forward, there is going to be a political expectation that self-driving technology should be flawless. Or, at least, perfect enough that it will never cause a fatality. Never mind that there are 30-40,000 motor vehicle deaths per year in the United States and over a million per year worldwide. It won’t be enough that an autonomous vehicle is safer than a non-autonomous vehicle; it will have to be orders-of-magnitude safer. Take, as an example, passenger airline travel. Despite a safety rate that is probably about 10X better for aircraft than for cars, the regulatory environment for aircraft is much more stringent. Take away the “human” pilot (or driver) and I predict the requirements for safety will be much higher than for aviation.

Where I’m headed in all this is, I suppose, to answer the question about when we will see self driving cars. It is tempting to see that as a technological question – when will the technology be mature enough to be sold to consumers? But it is more than that.

I recall seeing somewhere an example of “artificial intelligence” for a vehicle system. The example was of a system that treats a ball rolling across the street as a trigger for logic that anticipates there might be a child chasing that ball. A good example of an important problem to solve before putting an autonomous car onto a residential street. Otherwise, one child run down while he was chasing his ball might be enough for a regulatory shutdown. But how about the other side of that coin? What happens the first time a car swerves to avoid a non-existent child and hits an entirely-existent parked car? Might that cause a regulatory shutdown too?

Is regulatory shutdown inevitable?

Robo-Soldiers

At roughly the same time that the self-driving car fatalities were in the news, there was another announcement, even more closely related to my previous post. Video-game developer EA posted a video showing the results of a multi-disciplinary effort to train an AI player for their Battlefield 1 game (which, despite the name, is actually the fifth version of the Battlefield series). The narrative for this demo is similar to that of Google’s (DeepMind) chess program. The training was created, as the marketing pitch says, “from scratch using only trial and error.” Taken at face value, that would seem to run counter to my previous conclusions, when I figured that the supposedly generic, self-taught AI was perhaps considerably less than it appeared.

Under closer examination, however, even the minute-and-a-half of demo video does not quite measure up to the headline hype, the assertion that neural nets have learned to play Battlefield, essentially, on their own. The video explains that the training method involves manually placing rewards throughout the map to try to direct the behavior of the agent-controlled soldiers.

The time frame for a project like this one would seem to preclude it being directly inspired by DeepMind’s published results for chess. Indeed, the EA Technical Director explains that it was earlier DeepMind work with Atari games that first motivated them to apply the technology to Battlefield. Whereas the chess example demonstrated the ability to play chess at a world-class level, the EA project demonstration merely shows that the AI agents grasp the basics of game play and not much more. The team’s near-term aspirations are limited; use of AI for quality testing is named as an expected benefit of this project. He does go so far as to speculate that, a few years out, the technology might be able to compete with human players within certain parameters. Once again, a far cry from a self-learning intelligence poised to take over the world.

Even still, the video demonstration offers a disclaimer. “EA uses AI techniques for entertainment purposes only. The AI discussed in this presentation is designed for use within video games, and cannot operate in the real world.”

Sounds like they wanted to nip any AI overlord talk in the bud.

From what I’ve seen of the Battlefield information, it is results only. There is no discussion of the methods used to create training data sets and design the neural network. Also absent is any information on how much effort was put into constructing this system that can learn “on its own.” I have a strong sense that it was a massive undertaking, but no data to back that up. When that process becomes automated (or even part of the self-evolution of a deep neural network), so that one can quickly go from a data set to a trained network (quickly in developer time, as opposed to computing time), the promise of the “generic intelligence” could start to materialize.

So, no, I’m not made nervous that an artificial intelligence is learning how to fight small unit actions. On the other hand, I am surprised at how quickly techniques seem to be spreading. Pleasantly surprised, I should add.

While the DeepMind program isn’t open for inspection, some of the fundamental tools are publicly available. As of late 2015, the Google library TensorFlow is available in open source. As of February this year, Google is making available (still in beta, as far as I know) their Tensor Processing Unit (TPU) as a cloud service. Among the higher-profile uses of TensorFlow is the app DeepFake, which allows its users to swap faces in video. A demonstration shows the app, using a standard desktop PC and about half an hour’s training time, producing something comparable to Industrial Light and Magic’s spooky-looking Princess Leia reconstruction.

Meanwhile, Facebook also has a project inspired by DeepMind’s earlier Go neural network system. In a challenge to Google’s secrecy, the Facebook project has been made completely open source allowing for complete inspection and participation in its experiments. Facebook announced results, at the beginning of May, of a 14-0 record of their AI bot against top-ranked Go players.

Competition and massive-online participation is bound to move this technology forward very rapidly.


The future’s uncertain and the end is always near.


*To be sure, I learned a few of those lessons the hard way, but that’s a tale for another day.

ABC Easy as 42

Teacher’s gonna show you how to get an ‘A’

In 1989, IBM hired a team of programmers out of Carnegie Mellon University. As part of his graduate program, team leader Feng-hsiung Hsu (aka Crazy Bird) developed a system for computerized chess playing that the team called Deep Thought. Deep Thought, the (albeit fictional) original, was the computer created in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy to compute the answer to Life, the Universe, and Everything. It was successful in determining that the answer was “42,” although it remained unknown what the question was. CMU’s Deep Thought, less ambitiously, was a custom-designed hardware-and-software solution for solving the problem of optimal chess playing.

Once at IBM, the project was renamed Deep Blue, with the “Blue” being a reference to IBM’s nickname of “Big Blue.”

On February 10th, 1996, Deep Blue won its first game against a chess World Champion, defeating Garry Kasparov. Kasparov would go on to win the match, but the inevitability of AI superiority was established.

Today, computer programs being able to defeat humans is no longer in question. While the game of chess may never be solved (à la checkers), it is understood that the best computer programs are superior players to the best human beings. Within the chess world, computer programs only make news for things like when top players may be using programs to gain an unfair advantage in tournament play.

Nevertheless, a chess-playing computer was in the news late last year. Headlines reported that a chess-playing algorithm based on neural networks, starting only from the rules of legal chess moves, had in four hours trained itself into a player that could beat any human and nearly all top-ranked chess programs. The articles spread across the internet through various media outlets, each summary featuring its own set of distortions and simplifications. In particular, writers who had been pushing articles about the impending loss of jobs to AI and robots jumped on this as proof that the end had come. Fortunately, most linked to the original paper rather than trying to decipher the details.

Like most, I found this to be pretty intriguing news. Unfortunately, I also happen to know a little (just a little, really) about neural networks, and didn’t even bother to read the whole paper before I started trying to figure out what had happened.

Some more background on this project. It was created at DeepMind, a subsidiary of Alphabet, Inc. This entity, formerly known simply as Google, reformed itself in the summer of 2015, with the new Google being one of many children of the Alphabet parent. Initial information suggested to me an attempt at creating one subsidiary for each letter of the alphabet, but time has shown that isn’t their direction. As of today, while there are many letters still open, several have multiple entries. Oh well, it sounded more fun my way. While naming a company “Alphabet” seems a bit uninspired, there is a certain logic to removing the name Google from the parent entity. No longer does one have to wonder why an internet company is developing self-driving cars.

Google’s self driving car?


The last time the world had an Artificial Intelligence craze was in the 1980s into the early 1990s. Neural networks were one of the popular machine intelligence techniques of that time too. At first they seemed to offer the promise of a true intelligence; simply mimicking the structure of a biological brain could produce an ability to generalize intelligence, without people to craft that intelligence in code. It was a program that could essentially teach itself. The applications for such systems seemed boundless.

Unfortunately, the optimism was quickly quashed. Neural networks had a number of flaws. First, they required huge amounts of “training” data. Neural nets work by finding relationships within data, but that source data has to be voluminous and it has to be suited to teaching the neural network. The inputs had to be properly chosen, so as to work well with the network’s manipulation of that data, and the data themselves had to be properly representative of the space being modeled. Furthermore, significant preprocessing was required from the person organizing the training. Additional inputs would result in exponential increases in both the training data requirement and the amount of processing time to run through the training.

It is worthwhile to recall the computer power available to neural net programmers of that time. Even a high-end server of 35 years ago is probably put to shame by the Xbox plugged into your television. Furthermore, the Xbox is better suited to the problem. The mathematics capability of Graphical Processing Units (GPUs) is a more efficient design for solving these kinds of matrix problems. Just like Bitcoin mining, it is the GPU on a computer that is going to best be able to handle neural network training.

To illustrate, let me consider briefly a “typical” neural network application of the previous generation. One use is something called a “soft sensor.” Another innovation of that same time was the rapid expansion in capabilities of higher-level control systems for industrial processes. For example, some kind of factory-wide system could collect real-time data (temperatures, pressures, motor speeds – whatever is important) and present it in an organized fashion to give an overview of plant performance and, in some cases, automate overall plant control. For many systems, however, the full picture wasn’t always available in real time.

Let’s imagine the production of a product which has a specification limiting the amount of some impurity. Largely, we know what the right operating parameters of the system are (temperatures, pressures, etc.), but to actually measure for impurities, we manually draw a sample, send it off to a lab for testing, and wait a day or two for the result. It would stand to reason that, in order to keep your product within spec, you must operate far enough away from the threshold that if the process begins to drift, you would usually have time to catch it before it goes out of spec. Not only does that mean you’re, most of the time, producing a product that exceeds specification (presumably at extra cost), but if the process ever moves faster than expected, you may have to trash a day’s worth of production created while you were waiting for lab results.

Enter the neural network and that soft sensor. We can create a database of the data that were collected in real time and correlate that data with the matching sample analyses that were available afterward. Then a neural network can be trained using the real-time measurements as inputs to produce an output predicting the sample measurement. Assuming that the lab measurement is deducible from the on-line data, you now have in your automated control system (or even just as a presentation to the operators) a real-time “measurement” of data that otherwise won’t be available until much later. Armed with that extra knowledge, you would expect to both cut operating costs (by operating tighter to specification) and prevent waste (by avoiding out-of-spec conditions before they happen).
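A bare-bones version of that soft sensor might look like the sketch below: fit a small neural network mapping the real-time measurements to the (delayed) lab result, then use it to estimate the lab value on-line. The data, the variable names, and the underlying relationship are all synthetic stand-ins invented for illustration.

```python
# Soft-sensor sketch: predict a lab-measured impurity from live process data.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend plant history: temperature, pressure, feed rate...
n = 2000
temp = rng.normal(180.0, 5.0, n)
pres = rng.normal(3.5, 0.2, n)
feed = rng.normal(50.0, 4.0, n)
X = np.column_stack([temp, pres, feed])

# ...and the lab-measured impurity that only shows up a day or two later
# (an invented relationship, plus measurement noise).
impurity = (0.02 * (temp - 175) + 0.5 * (pres - 3.5)
            - 0.01 * (feed - 50) + rng.normal(0.0, 0.05, n))

X_train, X_test, y_train, y_test = train_test_split(X, impurity, random_state=0)

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000,
                                   random_state=0))
model.fit(X_train, y_train)
print("held-out R^2:", round(model.score(X_test, y_test), 3))

# On-line use: an immediate estimate of the lab result from live readings.
print("predicted impurity:", model.predict([[182.0, 3.6, 48.0]])[0])
```

Everything the rest of this section warns about applies here: the prediction is only as good as the assumption that the lab result really is deducible from the measurements you happen to be collecting.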

That sounds very impressive, but I did use the word “assuming.” There were a lot of factors that had to come together before determining that a particular problem was solvable with neural networks. Obviously, the result you are trying to predict has to, indeed, be predictable from the data that you have. What this meant in practice is that implementing neural networks was much bigger than just the software project. It often meant redesigning your system to, for example, collect data on aspects of your operation that were never necessary for control, but are necessary for the predictive functioning of the neural net. You also need lots and lots of data. Operations that collected data slowly or inconsistently might not be capable of providing a data set suitable for training. Another gotcha was that collecting data from a system in operation probably meant that said system was already being controlled. Therefore, a neural net could just as easily be learning how your control system works, rather than the underlying fundamentals of your process. In fact, if your control reactions were consistent, they might be a much easier thing for the neural net to learn than the more subtle and variable physical process.

The result was that many applications weren’t suitable for neural networks and others required a lot of prep-work. Projects might begin with redesigning the data collection system to get more and better data. Good data sets in hand, one was then forced into the time-intensive data analysis necessary to ensure a good training set. For example, it was often useful to pre-analyze the inputs to eliminate any dependent variables. Now, technically, that’s part of what the neural network should be good at – extracting the core dependencies from a complex system. However, the amount of effort – in data collected and training time – increases exponentially when you add inputs and hidden nodes, so simplifying a problem was well worth the effort. While it might seem like you can always just collect more data, remember that the data needed to be representative of the domain space. For example, if the condition that results in your process wandering off-spec only occurs once every three or four months, then doubling your complexity might mean (depending on your luck) increasing the data collection period from a month or two to over a year.

Hopefully you’ll excuse my trip down a neural net memory lane, but I wanted to set your expectations of neural network technology where mine were, because the state of the art is very different than what it was. We’ve probably all seen some of the results with image recognition that seems to be one of the hottest topics in neural networks these days.

So back to when I read the article. My first thought was to think in terms of the neural network technology as I was familiar with it.

My starting point in designing my own chess neural net has to be a representation of the board layout. If you know chess, you probably have a pretty good idea how to describe a chess board. You can describe each piece using a pretty concise terminology. In this case, I figure it is irrelevant where a piece has been. So whether it started as a king’s knight’s pawn or a queen’s rook’s pawn, that doesn’t affect its performance. So you have 6 possible piece descriptors which need to be placed into the 64 squares that they could possibly reside upon. So, for example, imagine that I’m going to assign an integer to the pieces, and then use positive for white and negative for black:

Pawn   Knight   Bishop   Rook   King   Queen
  1       2        3       4      5       6

My board might look something like this: 4,2,3,6,5,3,2,4,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0…-3,-2,-4
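Spelled out as code (purely to illustrate the representation being discussed), that encoding might look like this:

```python
# Board-as-64-integers encoding: piece values 1-6, positive for white,
# negative for black, zero for an empty square.
PIECE = {"pawn": 1, "knight": 2, "bishop": 3, "rook": 4, "king": 5, "queen": 6}

def starting_board():
    back_rank = ["rook", "knight", "bishop", "queen", "king",
                 "bishop", "knight", "rook"]
    board = [0] * 64
    for file, name in enumerate(back_rank):
        board[file] = PIECE[name]            # white back rank
        board[8 + file] = PIECE["pawn"]      # white pawns
        board[48 + file] = -PIECE["pawn"]    # black pawns
        board[56 + file] = -PIECE[name]      # black back rank
    return board

print(starting_board()[:8])    # [4, 2, 3, 6, 5, 3, 2, 4]
```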

If I am still living in the 90s, I’m immediately going to be worried about the amount of data, and might wonder if I can compress the representation of my board based on assumptions about the starting positions. I’ve got all those zeros in the center of my matrix, and as the game progresses, I’m going to be getting fewer pieces and more zeros. Sixty-four inputs seems like a lot (double that to get current position and post-move position), and I might hope to winnow that down to some manageable figure with the kind of efforts that I talked about above.

If I hadn’t realized my problem already, I’d start to figure it out now. Neural networks like inputs to be proportional. Obviously, binary inputs are good – something either affects the prediction or doesn’t. But for variable inputs, the variation must make sense in terms of the problem you are solving. Using the power output of a pump as an input to a neural network makes sense. Using the model number of that pump, as an integer, wouldn’t make sense unless there is a coincidental relationship between the model number and some function meaningful to your process. Going back to my board description above, I could theoretically describe the “power” of my piece with a number between 1 and 10 (as an example), but any errors in my ability to accurately rank my pieces contribute to prediction errors. So is a Queen worth six times a pawn, or nine? Get that wrong, and my neural net training has an inaccuracy built in right up front. And, by the way, that means “worth” to the neural net, not to me or other human players.

A much better way to represent a chess game to a mathematical “intelligence” is to describe the pieces. So, for example, each piece could be described with two inputs, giving its deviation from that piece’s starting position in the X and Y axes, with perhaps a third node to indicate whether the piece is on the board or captured. My starting board then becomes, by definition, 96 zeros, with numbers being populated (and generally growing) as the pieces move. It’s not terribly bigger (although rather horrifyingly so to my 90s self) than the representation by board, and I could easily get them on par by saying, for example, that captured pieces are moved elsewhere on the board, but well out of the 8x8 grid. Organizing by the pieces, though, is both non-intuitive for us human chess players and, in general, would seem less efficient in generalizing to other games. For example, if I’m modelling a card game (as I talked about in my previous post), describing every card and each of its possible positions is a much bigger data set than just describing what is in each hand and on the table. But, again, it should be clear that the description of the board is going to be considerably less meaningful as a mathematical entity than the description created by working from each game piece.
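A sketch of that piece-centric alternative, with the exact fields and piece names being my own choices for illustration, might look like this:

```python
# Piece-centric encoding: for each piece, its x/y offset from its starting
# square plus a captured flag, so the opening position is all zeros.
START_SQUARES = {                     # (file, rank) for a few white pieces
    "white_queen_rook": (0, 0),
    "white_king": (4, 0),
    "white_king_pawn": (4, 1),
    # ...and so on for all 32 pieces
}

def encode(positions):
    """positions: piece name -> (file, rank), or None if captured."""
    vector = []
    for name, (sx, sy) in START_SQUARES.items():
        pos = positions.get(name)
        if pos is None:                       # captured
            vector += [0, 0, 1]
        else:                                 # offset from start, still on board
            vector += [pos[0] - sx, pos[1] - sy, 0]
    return vector

# Opening position: nothing has moved, nothing is captured.
opening = dict(START_SQUARES)
print(encode(opening))    # [0, 0, 0, 0, 0, 0, 0, 0, 0] – all zeros, as described
```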

At this point, it is worth remembering again that this is no longer 1992. I briefly mentioned the advances in computing, both in power and in structure (the GPU architecture as superior for solving matrix math). That, in turn, has advanced the state of the art in neural network design and training. The combination goes a long way in explaining why image recognition is once again eyed as a problem for neural networks to address.

Consider the typical image. It is a huge number of pixels of (usually) highly-compressible data. But compressing the data will, as described above, befuddle the neural network. On the other hand, those huge, sparse matrices need representative training data to evenly cover the huge number of inputs, with that need increasing geometrically. It can quickly become, simply, too much of a problem to solve in a timely manner no matter what kind of computing power you’ve got to throw at it. But with that power, you can do new and interesting things. A solution for image recognition is to use “convolutional” networks.

Without trying to be too technically precise, I’ll try to capture the essence of this technique. The idea is that the input space can be broken up into sub-spaces (in an image, small fractions of the image), which then feed a significantly smaller neural network. Then, one might assume that those small networks are all the same or similar to each other. For image recognition, we might train 100s or even 1000s of networks, each operating on 1% of the image (in overlapping segments), creating a (relatively) small output based on the large number of pixels. Then those outputs feed a whole-image network. It is still a massive computational problem, but immensely smaller than the problem of training a network processing the entire image as the input.

Does that make a chess problem solvable? It should help, especially if you have multiple convolutional layers. So there might be a neural network that describes each piece (6 inputs for old/new position (2D) plus on/off board) and reduces it to maybe 3 outputs. A second could map similar pieces: where are the bishops? Where are the pawns? Another sub-network, repeated twice, could try just looking at one player at a time. It is still a huge problem, but I can see that it’s something that is becoming solvable given some time and effort.

Of course, this is Alphabet, Inc we are talking about. They’ve got endless supplies of (computing) time and (employee) effort, so if it is starting to look doable to a mere human like me, it is certainly doable for them.

At this point, I went back to the research paper, wherein I discovered that some of my intuition was right, although I didn’t fully appreciate that last point. Just as a simple example, the input layer for the DeepMind system represents the pieces as a stack of boards: for each piece type and color, an 8-by-8 plane marking where those pieces sit. They also use a history of turns, not just the current and next turn. It is orders-of-magnitude more data than I anticipated, but in extremely sparse data sets. In fact, it looks very much like image processing, but with much more ordered images (to a computer mind, at least). The paper states they are using Tensor Processing Units, a Google concoction meant to use hardware having similar advantages to the GPU and its matrix-math specialization, but further optimized specifically to solve this kind of neural network training problem.

So let’s finally go back to the claim that got all those singularity-is-nigh dreams dancing in the heads of internet commentators. The DeepMind team were able to train, in a matter of (really) twenty-four hours, a superhuman-level chess player with no a priori chess knowledge. Further, the paper states that the training set consists of 800 randomly-generated games (constrained only to be made up of legal moves), which seems like an incredibly small data set. Even realizing how big those representations are (with their sparse descriptions of the piece locations as well as per-piece historical information), it all sounds awfully impressive. Of course, that is 800 games per iteration. If I’m reading right, that might be 700k iterations in over 9 hours using hardware nearly inconceivable to us mortals.

And that’s just the end result of a research project that took how long? Getting to the point where they could hit the “run” button certainly took months, and probably years.

First you’ve got to come up with the data format, and the ability to generate games in that format. Surprisingly, the paper says that the exact representation wasn’t a significant factor. I suppose that is an advantage of its sparseness. Next, you’ve got to architect that neural net. How many convolutions over what subsets? How many layers? How many nodes? That’s a huge research project, and one that is going to need huge amounts of data – not the 800 randomly generated games you used at the end of it all.

The end result of all this – after a process involving a huge number of PhD hours and petaFLOPS of computational power – is that you’ve created a brain that can do one thing: learn about chess games. Yes, it is a brain without any knowledge in it – a tabula rasa – but it is a brain that is absolutely useless if provided knowledge about anything other than playing chess.

It’s still a fabulous achievement, no doubt. It is also research that is going to be useful to any number of AI learning projects going forward. But what it isn’t is any kind of demonstration that computers can out-perform people (or even mice, for that matter) in generic learning applications. It isn’t a demonstration that neural nets are being advanced into the area of general learning. This is not an Artificial Intelligence that could be, essentially, self-teaching and therefore life-like in terms of its capabilities.

And, just to let the press know, it isn’t the end of the world.