Artificial, Yes, but Intelligent?

Keep your eyes on the road, your hands upon the wheel.

When I was in college, only one of my roommates had a car. The first time it snowed, he expounded upon the virtues of finding an empty and slippery parking lot and purposely putting your car into spins.  “The best thing about a snow storm,” he said. At the time I thought he was a little crazy. Later, when I had the chance to try it, I came to see it his way. Not only is it much fun to slip and slide (without the risk of actually hitting anything), but getting used to how the car feels when the back end slips away is the first step in learning how to fix it, should it happen when it actually matters.

Recently, I found myself in an empty, ice-covered parking lot and, remembering the primary virtue of a winter storm, I hit the gas and yanked on the wheel… but I didn’t slide. Instead, I encountered a bunch of beeping and flashing as the electronic stability control system on my newish vehicle kicked in. What a disappointment it was. It also got me thinkin’.

For younger drivers who will almost never encounter a loss-of-traction slip condition, how do they learn to recover from a slide or a spin once it starts? Back in the dark ages, when I was learning to drive, most cars were rear-wheel-drive with a big, heavy engine in the front. It was impossible not to slide around a little when driving in a snow storm. Knowing all the tricks of slippery driving was almost a prerequisite to going out into the weather. Downshifting (or using those number gears on your automatic transmission), engine braking, and counter-steering were all part of getting from A to B. As a result*, when an unexpectedly slippery road surprises me, I instinctively take my foot off the brakes/gas and counter-steer without having to consciously remember the actual lessons. So does a car that prevents sliding 95% of the time result in a net increase in safety, even though it probably makes that other 5% worse? It’s not immediately obvious that it does.

On the Road

I was reminded of the whole experience a month or so ago when I read about the second self-driving car fatality. Both crashes happened within a week or so of each other in Western states; the first in Arizona and the second in California. In the second crash, Tesla’s semi-autonomous driving function was in fact engaged at the time of the crash and the driver’s hands were not on the wheel six seconds prior. Additional details do not seem to be available from media reports, so the actual how and why must remain the subject of speculation. In the first, however, the media has engaged in the speculation for us. In Arizona, it was an Uber vehicle (a Volvo in this case) that was involved, and the fatality was not the driver. The media has also reported quite a lot that went wrong. The pedestrian who was struck and killed was jaywalking, which certainly is a major factor in her death. Walking out in front of a car at night is never a safe thing to do, whether or not that car is self-driving. Secondly, video was released showing the driver was looking at something below the dashboard level immediately before the crash, and thus was not aware of the danger until the accident occurred. The self-driving system itself did not seem to take any evasive action.

Predictably, the Arizona state government responded by halting the Uber self-driving car program. More on that further down, but first, a look at the driver’s distraction.

After that video was released, media attention focused on the distracted-driving angle of the crash. It also brought up the background of the driver, who had a number of violations behind him. Certainly the issue of electronics and technology detracting from safe driving is a hot topic and something, unlike self-driving Uber vehicles, that most of us encounter in our everyday lives. But I wonder if this exposes a fundamental flaw in the self-driving technology.

It’s not exactly analogous to my snow situation above, but I think the core question is the same. The current implementation of the self-driving car technology augments the human driver rather than replaces him or her. In doing so, however, it also removes some of the responsibility from the driver, as well as making him more complacent about the dangers that he may be about to encounter. The more that the car does for the driver, the greater the risk that the driver will allow his attention to wander rather than stay focused, on the assumption that the autonomous system has him covered. In the longer term, are there aspects of driving that the driver will not only stop paying attention to, but lose the ability to manage in the way a driver of a non-automated car once did?

Naturally, all of this can be designed into the self-driving system itself. Even if a car is capable of, essentially, driving itself over a long stretch of a highway, it could be designed to engage the driver every so many seconds. Requiring otherwise-unnecessary input from the operator can be used to make sure she is ready to actively control the car if needed. I note that we aren’t breaking new ground here. A modern aircraft can virtually fly itself, and yet some parts of the design (plus operational procedures) are surely in place to make sure that the pilots are ready when needed.

As I said, the governmental response has been to halt the program. In general, it will be the governmental response that will be the biggest hurdle for self-driving car technology.

In the specific case of Arizona, I’m not actually trying to second guess their decision. Presumably, they set up a legal framework for the testing of self-driving technology on the public roadways. If the accident in question exceeded any parameters of that legal framework, then the proper response would be to suspend the testing program. On the other hand, it may be that the testing framework had no contingencies built into it, in which case any injuries or fatalities would have to be evaluated as they happen. If so, a reactionary legal response may not be productive.

I think, going forward, there is going to be a political expectation that self-driving technology should be flawless. Or, at least, perfect enough that it will never cause a fatality. Never mind that there are 30-40,000 motor vehicle deaths per year in the United States and over a million per year worldwide. It won’t be enough that an autonomous vehicle is safer than a non-autonomous vehicle; it will have to be orders-of-magnitude safer. Take, as an example, passenger airline travel. Despite a fatality rate that is probably about 10X better for aircraft than for cars, the regulatory environment for aircraft is much more stringent. Take away the “human” pilot (or driver) and I predict the requirements for safety will be much higher than for aviation.

Where I’m headed in all this is, I suppose, to answer the question of when we will see self-driving cars. It is tempting to see that as a technological question – when will the technology be mature enough to be sold to consumers? But it is more than that.

I recall seeing somewhere an example of “artificial intelligence” for a vehicle system. The example was of a system that detects a ball rolling across the street and treats it as a trigger for logic that anticipates there might be a child chasing that ball. A good example of an important problem to solve before putting an autonomous car onto a residential street. Otherwise, one child run down while he was chasing his ball might be enough for a regulatory shutdown. But how about the other side of that coin? What happens the first time a car swerves to avoid a non-existent child and hits an entirely-existent parked car? Might that cause a regulatory shutdown too?

Is regulatory shutdown inevitable?

Robo-Soldiers

At roughly the same time that the self-driving car fatalities were in the news, there was another announcement, even more closely related to my previous post. Video-game developer EA posted a video showing the results of a multi-disciplinary effort to train an AI player for their Battlefield 1 game (which, despite the name, is actually the fifth version of the Battlefield series). The narrative for this demo is similar to that of Google’s (DeepMind) chess program. The training was created, as the marketing pitch says, “from scratch using only trial and error.” Taken at face value, that would seem to run counter to my previous conclusions, in which I figured that the supposed generic, self-taught AI was perhaps considerably less than it appeared.

Under closer examination, however, even the minute-and-a-half of demo video does not quite measure up to the headline hype, the assertion that neural nets have learned to play Battlefield, essentially, on their own. The video explains that the training method involves manually placing rewards throughout the map to try to direct the behavior of the agent-controlled soldiers.

The time frame for a project like this one would seem to preclude it from being directly inspired by DeepMind’s published results for chess. Indeed, the EA Technical Director explains that it was earlier DeepMind work with Atari games that first motivated them to apply the technology to Battlefield. Whereas the chess example demonstrated the ability to play chess at a world-class level, the EA project demonstration merely shows that the AI agents grasp the basics of game play and not much more. The team’s near-term aspirations are limited; use of AI for quality testing is named as an expected benefit of this project. The Technical Director does go so far as to speculate that a few years out, the technology might be able to compete with human players within certain parameters. Once again, a far cry from a self-learning intelligence poised to take over the world.

Even so, the video demonstration offers a disclaimer. “EA uses AI techniques for entertainment purposes only. The AI discussed in this presentation is designed for use within video games, and cannot operate in the real world.”

Sounds like they wanted to nip any AI overlord talk in the bud.

From what I’ve seen of the Battlefield information, it is results only. There is no discussion of the methods used to create training data sets and design the neural network. Also absent is any information on how much effort was put into constructing this system that can learn “on its own.” I have a strong sense that it was a massive undertaking, but no data to back that up. When that process becomes automated (or even part of the self-evolution of a deep neural network), so that one can quickly go from a data set to a trained network (quickly in developer time, as opposed to computing time), the promise of the “generic intelligence” could start to materialize.

So, no, I’m not made nervous that an artificial intelligence is learning how to fight small unit actions. On the other hand, I am surprised at how quickly techniques seem to be spreading. Pleasantly surprised, I should add.

While the DeepMind program isn’t open for inspection, some of the fundamental tools are publicly available. As of late 2015, the Google library TensorFlow is available in open source. As of February this year, Google is making available (still in beta, as far as I know) their Tensor Processing Unit (TPU) as a cloud service. Among the higher-profile uses of TensorFlow is the app DeepFake, which allows its users to swap faces in video. A demonstration, using a standard desktop PC and about half an hour’s training time, produces something comparable to Industrial Light and Magic’s spooky-looking Princess Leia reconstruction.

Meanwhile, Facebook also has a project inspired by DeepMind’s earlier Go neural network system. In a challenge to Google’s secrecy, the Facebook project has been made completely open source, allowing for complete inspection of, and participation in, its experiments. Facebook announced results, at the beginning of May, of a 14-0 record for their AI bot against top-ranked Go players.

Competition and massive online participation are bound to move this technology forward very rapidly.


The future’s uncertain and the end is always near.


*To be sure, I learned a few of those lessons the hard way, but that’s a tale for another day.

ABC Easy as 42

Teacher’s gonna show you how to get an ‘A’

In 1989, IBM hired a team of programmers out of Carnegie Mellon University. As part of his graduate program, team leader Feng-hsiung Hsu (aka Crazy Bird) developed a system for computerized chess playing that the team called Deep Thought. Deep Thought, the (albeit fictional) original, was the computer created in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy to compute the answer for Life, the Universe, and Everything. It was successful in determining the answer was “42,” although it remained unknown what the question was. CMU’s Deep Thought, less ambitiously, was a custom-designed hardware-and-software solution for solving the problem of optimal chess playing.

Once at IBM, the project was renamed Deep Blue, with the “Blue” being a reference to IBM’s nickname of “Big Blue.”

On February 10th, 1996, Deep Blue won its first game against a chess World Champion, defeating Garry Kasparov. Kasparov would go on to win the match, but the inevitability of AI superiority was established.

Today, the ability of computer programs to defeat humans is no longer in question. While the game of chess may never be solved (à la checkers), it is understood that the best computer programs are superior players to the best human beings. Within the chess world, computer programs only make news for things like when top players may be using programs to gain an unfair advantage in tournament play.

Nevertheless, a chess-playing computer was in the news late last year. Headlines reported that a chess-playing algorithm based on neural networks, starting only from the rules of legal chess moves, had in four hours produced a program that could beat any human and nearly all top-ranked chess programs. The articles spread across the internet through various media outlets, each summary featuring its own set of distortions and simplifications. In particular, writers who had been pushing articles about the impending loss of jobs to AI and robots jumped on this as proof that the end had come. Fortunately, most linked to the original paper rather than trying to decipher the details themselves.

Like most, I found this to be pretty intriguing news. Unfortunately, I also happen to know a little (just a little, really) about neural networks, and didn’t even bother to read the whole paper before I started trying to figure out what had happened.

Some more background on this project. It was created at DeepMind, a subsidiary of Alphabet, Inc. This entity, formerly known simply as Google, reformed itself in the summer of 2015, with the new Google being one of many children of the Alphabet parent. Initial information suggested to me an attempt at creating one subsidiary for each letter of the alphabet, but time has shown that isn’t their direction. As of today, while there are many letters still open, several have multiple entries. Oh well, it sounded more fun my way. While naming a company “Alphabet” seems a bit uninspired, there is a certain logic to removing the name Google from the parent entity. No longer does one have to wonder why an internet company is developing self-driving cars.

Google’s self driving car?


The last time the world had an Artificial Intelligence craze was in the 1980s into the early 1990s. Neural networks were one of the popular machine intelligence techniques of that time too. At first they seemed to offer the promise of a true intelligence; simply mimicking the structure of a biological brain could produce an ability to generalize intelligence, without people to craft that intelligence in code. It was a program that could essentially teach itself. The applications for such systems seemed boundless.

Unfortunately, the optimism was quickly quashed. Neural networks had a number of flaws. First, they required huge amounts of “training” data. Neural nets work by finding relationships within data, but that source data has to be voluminous and it has to be suited to teaching the neural network. The inputs had to be properly chosen, so as to work well with the network’s manipulation of that data, and the data themselves had to be properly representative of the space being modeled. Furthermore, significant preprocessing was required from the person organizing the training. Additional inputs would result in exponential increases in both the training data requirement and the amount of processing time to run through the training.

It is worthwhile to recall the computer power available to neural net programmers of that time. Even a high-end server of 35 years ago is probably put to shame by the Xbox plugged into your television. Furthermore, the Xbox is better suited to the problem: the Graphical Processing Unit (GPU) is a more efficient design for solving these kinds of matrix problems. Just like Bitcoin mining, it is the GPU in a computer that is going to be best able to handle neural network training.

To illustrate, let me consider briefly a “typical” neural network application of the previous generation. One use is something called a “soft sensor.” Another innovation of that same time was the rapid expansion in capabilities of higher-level control systems for industrial processes. For example, some kind of factory-wide system could collect real-time data (temperatures, pressures, motor speeds – whatever is important) and present them in an organized fashion to give an overview of plant performance and, in some cases, automate overall plant control. For many systems, however, the full picture wasn’t always available in real time.

Let’s imagine the production of a product which has a specification limiting the amount of some impurity. Largely, we know what the right operating parameters of the system are (temperatures, pressures, etc.), but to actually measure for impurities, we manually draw a sample, send it off to a lab for testing, and wait a day or two for the result. It stands to reason that, in order to keep your product within spec, you must operate far enough away from the threshold that if it begins to drift, you would usually have time to catch it before it goes out of spec. Not only does that mean you’re, most of the time, producing a product that exceeds specification (presumably at extra cost), but if the process ever moves faster than expected, you may have to trash a day’s worth of production created while you were waiting for lab results.

Enter the neural network and that soft sensor. We can create a database of the data that were collected in real time and correlate that data with the matching sample analyses that were available afterward. Then a neural network can be trained using the real-time measurements as input to produce an output predicting the sample measurement. Assuming that the lab measurement is deducible from the on-line data, you now have in your automated control system (or even just as a presentation to the operators) a real-time “measurement” of data that otherwise won’t be available until much later. Armed with that extra knowledge, you would expect to both cut operating costs (by operating tighter to specification) and prevent waste (by avoiding out-of-spec conditions before they happen).
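To make that concrete, here is a minimal sketch of what training such a soft sensor might look like with today’s tools (scikit-learn rather than anything period-appropriate). The data file and column names are hypothetical stand-ins for the real-time measurements and the delayed lab result.

```python
# Minimal soft-sensor sketch. The historical log and column names are
# hypothetical: the on-line measurements are the inputs, and the delayed
# lab analysis is the value the network learns to predict.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = pd.read_csv("process_history.csv")             # hypothetical plant log
X = data[["reactor_temp", "pressure", "feed_rate"]]   # real-time measurements
y = data["impurity_ppm"]                               # matching lab results

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Scaling matters: neural nets want inputs on comparable ranges.
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000))
model.fit(X_train, y_train)

print("R^2 on held-out data:", model.score(X_test, y_test))
```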

That sounds very impressive, but I did use the word “assuming.” There were a lot of factors that had to come together before determining that a particular problem was solvable with neural networks. Obviously, the result you are trying to predict has to, indeed, be predictable from the data that you have. What this meant in practice is that implementing neural networks was much bigger than just the software project. It often meant redesigning your system to, for example, collect data on aspects of your operation that were never necessary for control, but are necessary for the predictive functioning of the neural net. You also need lots and lots of data. Operations that collected data slowly or inconsistently might not be capable of providing a data set suitable for training. Another gotcha was that collecting data from a system in operation probably meant that said system was already being controlled. Therefore, a neural net could just as easily be learning how your control system works, rather than the underlying fundamentals of your process. In fact, if your control reactions were consistent, that might be a much easier thing for the neural net to learn than the more subtle and variable physical process.

The result was that many applications weren’t suitable for neural networks and others required a lot of prep work. Projects might begin with redesigning the data collection system to get more and better data. Good data sets in hand, one was now forced into the time-intensive data analysis necessary to ensure a good training set. For example, it was often useful to pre-analyze the inputs to eliminate any dependent variables. Now, technically, that’s part of what the neural network should be good at – extracting the core dependencies from a complex system. However, the amount of effort – in data collected and training time – increases exponentially when you add inputs and hidden nodes, so simplifying a problem was well worth the effort. While it might seem like you can always just collect more data, remember that the data needed to be representative of the domain space. For example, if the condition that results in your process wandering off-spec only occurs once every three or four months, then doubling your complexity might mean (depending on your luck) increasing the data collection from a month or two to over a year.
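As an illustration of that pre-analysis step, here is one simple way it might be done today: drop any input that is nearly a duplicate of (highly correlated with) an input already kept, so the network trains on a smaller, better-conditioned problem. The threshold and the log file are, again, hypothetical.

```python
# Drop near-duplicate inputs before training: if two measurements are
# highly correlated, keep only the first one encountered.
import pandas as pd

def drop_correlated(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    corr = df.corr().abs()
    keep = []
    for col in df.columns:
        if all(corr.loc[col, kept] < threshold for kept in keep):
            keep.append(col)
    return df[keep]

inputs = pd.read_csv("process_history.csv")    # same hypothetical log as above
print(drop_correlated(inputs).columns.tolist())
```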

Hopefully you’ll excuse my trip down neural net memory lane, but I wanted to set your expectations of neural network technology where mine were, because the state of the art is very different from what it was. We’ve probably all seen some of the results with image recognition, which seems to be one of the hottest topics in neural networks these days.

So back to when I read the article. My first thought was to think in terms of the neural network technology as I was familiar with it.

My starting point in designing my own chess neural net has to be a representation of the board layout. If you know chess, you probably have a pretty good idea how to describe a chess board. You can describe each piece using a pretty concise terminology. In this case, I figure it is irrelevant where a piece has been. Whether it started as a king’s knight’s pawn or a queen’s rook’s pawn doesn’t affect its performance. So you have 6 possible piece descriptors which need to be placed into the 64 squares that they could possibly reside upon. So, for example, imagine that I’m going to assign an integer to the pieces, and then use positive for white and negative for black:

Pawn Knight Bishop Rook King Queen
1 2 3 4 5 6

My board might look something like this: 4,2,3,6,5,3,2,4,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0…-3,-2,-4
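In code, that square-by-square encoding is about as simple as it sounds. This is just my own illustration of the scheme above, not anything from the DeepMind paper:

```python
# Encode the starting position square by square, using the integer piece
# codes from the table above (positive = white, negative = black, 0 = empty).
PIECE = {"P": 1, "N": 2, "B": 3, "R": 4, "K": 5, "Q": 6}
BACK_RANK = ["R", "N", "B", "Q", "K", "B", "N", "R"]

def starting_board():
    board = [0] * 64                      # one integer per square
    for i, piece in enumerate(BACK_RANK):
        board[i] = PIECE[piece]           # white back rank
        board[8 + i] = PIECE["P"]         # white pawns
        board[48 + i] = -PIECE["P"]       # black pawns
        board[56 + i] = -PIECE[piece]     # black back rank
    return board

print(starting_board())   # 4,2,3,6,5,3,2,4,1,1,... ,-3,-2,-4 as in the text
```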

If I were still living in the 90s, I’d immediately be worried about the amount of data, and might wonder if I could compress the representation of my board based on assumptions about the starting positions. I’ve got all those zeros in the center of my matrix and, as the game progresses, I’m going to be getting less data and more zeros. Sixty-four inputs seems like a lot (double that to get current position and post-move position), and I might hope to winnow that down to some manageable figure with the kind of efforts that I talked about above.

If I hadn’t realized my problem already, I’d start to figure it out now. Neural networks like inputs to be proportional. Obviously, binary inputs are good – something either affects the prediction or doesn’t. But for variable inputs, the variation must make sense in terms of the problem you are solving. Using the power output of a pump as an input to a neural network makes sense. Using the model number of that pump, as an integer, wouldn’t make sense unless there happens to be a coincidental relationship between the model number and some function meaningful to your process. Going back to my board description above, I could theoretically describe the “power” of my piece with a number between 1 and 10 (as an example), but any errors in my ability to accurately rank my pieces contribute to prediction errors. So is a Queen worth six times a pawn or nine? Get that wrong, and my neural net training has an inaccuracy built in right up front. And, by the way, that means “worth” to the neural net, not to me or other human players.

A much better way to represent a chess game to a mathematical “intelligence” is to describe the pieces. So, for example, each piece could be described with two inputs, describing its deviation from that piece’s starting position in the X and Y axes, with perhaps a third node to indicate whether the piece is on the board or captured. My starting board then becomes, by definition, 96 zeros, with numbers being populated (and generally growing) as the pieces move. It’s not terribly bigger (although rather horrifyingly so to my 90s self) than the representation by board, and I could easily get them on par by saying, for example, that captured pieces are moved elsewhere on the board, but well out of the 8X8 grid. Organizing by the pieces, though, is both non-intuitive for us human chess players and, in general, would seem less efficient in generalizing to other games. For example, if I’m modelling a card game (as I talked about in my previous post), describing every card and each of its possible positions is a much bigger data set than just describing what is in each hand and on the table. But, again, it should be clear that the description of the board is going to be considerably less meaningful as a mathematical entity than the description created by working from each game piece.
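Here is the same idea as a sketch: three numbers per piece (its X and Y offsets from its own starting square, plus a captured flag), which gives the 96 zeros at the start of the game. Again, this is my illustration of the idea, not the representation DeepMind actually used.

```python
# Piece-centric encoding: for each of the 32 pieces, record its offset from
# its own starting square (dx, dy) plus a captured flag. At the start of
# the game every entry is zero -- 96 zeros in all.
def encode_by_piece(current, start):
    """current/start: dicts of piece-id -> (file, rank), or None if captured."""
    features = []
    for piece_id, start_square in start.items():
        square = current[piece_id]
        if square is None:                         # captured
            features.extend([0, 0, 1])
        else:
            dx = square[0] - start_square[0]
            dy = square[1] - start_square[1]
            features.extend([dx, dy, 0])
    return features

# Example: after 1.e4, white's king's pawn has moved two ranks forward.
start = {"white_e_pawn": (4, 1)}
now = {"white_e_pawn": (4, 3)}
print(encode_by_piece(now, start))   # [0, 2, 0]
```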

At this point, it is worth remembering again that this is no longer 1992. I briefly mentioned the advances in computing, both in power and in structure (the GPU architecture as superior for solving matrix math). That, in turn, has advanced the state of the art in neural network design and training. The combination goes a long way in explaining why image recognition is once again eyed as a problem for neural networks to address.

Consider the typical image. It is a huge number of pixels of (usually) highly-compressible data. But compressing the data will, as described above, befuddle the neural network. On the other hand, those huge, sparse matrices need representative training data to evenly cover the huge number of inputs, with that need increasing geometrically. It can quickly become, simply, too much of a problem to solve in a timely manner no matter what kind of computing power you’ve got to throw at it. But with that power, you can do new and interesting things. A solution for image recognition is to use “convolutional” networks.

Without trying to be too technically precise, I’ll try to capture the essence of this technique. The idea is that the input space can be broken up into sub-spaces (in an image, small fractions of the image) that then feed a significantly smaller neural network. Then, one might assume that those small networks are all the same or similar to each other. For image recognition, we might train hundreds or even thousands of networks, each operating on 1% of the image (in overlapping segments), creating a (relatively) small output based on the large number of pixels. Then those outputs feed a whole-image network. It is still a massive computational problem, but immensely smaller than the problem of training a network processing the entire image as the input.

Does that make a chess problem solvable? It should help, especially if you have multiple convolutional layers. So there might be a neural network that describes each piece (6 inputs for old/new position (2D) plus on/off board) and reduces it to maybe 3 outputs. A second could map similar pieces… where are the bishops? where are the pawns? Another sub-network, repeated twice, could try just looking at one player at a time. It is still a huge problem, but I can see that it’s something that is becoming solvable given some time and effort.
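For what it’s worth, the standard spatial form of that idea is just a few lines in a modern framework. The sketch below uses Keras (the high-level API that ships with TensorFlow) to build a toy convolutional network over a board described as twelve stacked 8x8 piece planes; each Conv2D filter is exactly the kind of small shared sub-network slid over overlapping patches described above. The layer sizes and the single evaluation output are arbitrary choices of mine, not DeepMind’s architecture.

```python
# A toy convolutional network over an 8x8 board given as 12 stacked planes
# (one plane per piece type per side). Sizes are arbitrary illustrations.
import tensorflow as tf

model = tf.keras.Sequential([
    # Each Conv2D filter is a small shared network applied to every
    # overlapping 3x3 patch of the board.
    tf.keras.layers.Conv2D(32, kernel_size=3, padding="same",
                           activation="relu", input_shape=(8, 8, 12)),
    tf.keras.layers.Conv2D(32, kernel_size=3, padding="same", activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="tanh"),   # e.g. a position score in [-1, 1]
])
model.compile(optimizer="adam", loss="mse")
model.summary()
```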

Of course, this is Alphabet, Inc we are talking about. They’ve got endless supplies of (computing) time and (employee) effort, so if it is starting to look doable to a mere human like me, it is certainly doable for them.

At this point, I went back to the research paper, wherein I discovered that some of my intuition was right, although I didn’t fully appreciate that last point. Just as a simple example, the input layer for the DeepMind system represents each piece as its own board showing that piece’s position: 32 sparse, 64-square positional grids. They also use a history of turns, not just the current and next turn. It is orders-of-magnitude more data than I anticipated, but in extremely sparse data sets. In fact, it looks very much like image processing, but with much more ordered images (to a computer mind, at least). The paper states they are using Tensor Processing Units, a Google concoction meant to use hardware having similar advantages to the GPU and its matrix-math specialization, but further optimized specifically to solve this kind of neural network training problem.
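Here is a rough sketch of what that sort of sparse, plane-based input might look like (my own illustration of the idea, not the paper’s exact layout): one 8x8 grid per piece type per side, stacked, with earlier positions appended as extra planes for history. It also produces exactly the kind of input the toy convolutional network above would consume.

```python
# Build a sparse "plane" representation of a position: one 8x8 grid per
# piece type per side (12 planes), with history handled by stacking the
# planes of earlier positions as additional channels.
import numpy as np

PIECE_TYPES = ["P", "N", "B", "R", "Q", "K"]

def position_planes(piece_squares):
    """piece_squares: dict like {("white", "N"): [(1, 0), (6, 0)], ...}"""
    planes = np.zeros((8, 8, 12), dtype=np.float32)
    for i, color in enumerate(["white", "black"]):
        for j, ptype in enumerate(PIECE_TYPES):
            for file, rank in piece_squares.get((color, ptype), []):
                planes[rank, file, i * 6 + j] = 1.0   # a lone 1 in a sea of 0s
    return planes

def with_history(positions):
    """Concatenate the plane stacks of the last few positions."""
    return np.concatenate([position_planes(p) for p in positions], axis=-1)
```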

So let’s finally go back to the claim that got all those singularity-is-nigh dreams dancing in the heads of internet commentators. The DeepMind team were able to train, in a matter of (really) twenty-four hours, a superhuman-level chess player with no a priori chess knowledge. Further, the paper states that the training set consists of 800 randomly-generated games (constrained only to be made up of legal moves), which seems like an incredibly small data set. Even realizing how big those representations are (with their sparse descriptions of the piece locations as well as per-piece historical information), it all sounds awfully impressive. Of course, that is 800 games per iteration. If I’m reading right, that might be 700k iterations in over 9 hours using hardware nearly inconceivable to us mortals.

And that’s just the end result of a research project that took how long? Getting to the point where they could hit the “run” button certainly took months, and probably years.

First you’ve got to come up with the data format, and the ability to generate games in that format. Surprisingly, the paper says that the exact representation wasn’t a significant factor. I suppose that is an advantage of its sparseness. Next, you’ve got to architect that neural net. How many convolutions over what subsets? How many layers? How many nodes? That’s a huge research project, and one that is going to need huge amounts of data – not the 800 randomly generated games you used at the end of it all.

The end result of all this – after a process involving a huge number of PhD hours and petaFLOPS of computational power – is that you’ve created a brain that can do one thing: learn about chess games. Yes, it is a brain without any knowledge in it – a tabula rasa – but it is a brain that is absolutely useless if provided knowledge about anything other than playing chess.

It’s still a fabulous achievement, no doubt. It is also research that is going to be useful to any number of AI learning projects going forward. But what it isn’t is any kind of demonstration that computers can out-perform people (or even mice, for that matter) in generic learning applications. It isn’t a demonstration that neural nets are being advanced into the area of general learning. This is not an Artificial Intelligence that could be, essentially, self-teaching and therefore life-like in terms of its capabilities.

And, just to let the press know, it isn’t the end of the world.

The Nature of my Game

I watched with glee while your kings and queens
fought for ten decades for the gods they made.

On October 19th, 1453, the French army entered Bordeaux. The province of Gascony, with Bordeaux as its capital, had been a part of England since the marriage, in 1152, of Eleanor of Aquitaine to the soon-to-be King Henry II of England. Between 1429 and 1450, the Hundred Years War had seen a reversal of English fortunes and a series of French victories which began under the leadership of Jeanne D’Arc and culminated in the French conquest and subsequent control of Normandy.

Following the French victory in Normandy, a three-year struggle for the control of Gascony began. As the fight went on, it saw dominance shift from the English to the French – and then back again after the arrival of John Talbot, Earl of Shrewsbury. Eventually Talbot succumbed to numbers and politics, leading his army against a superior French position at Castillon on July 17th of 1453. He and his son both paid with their lives and, with the defeat of the English army and the death of its leadership, the fall of all of Gascony and Bordeaux inevitably followed before the year’s end.

The Battle of Castillon is generally cited as the end of the Hundred Years War, although the milestone is considerably more obvious in retrospect than it was at the time. The English defeat did not result in a great treaty or the submission of one king to another. England simply no longer had control of her French territories, save for Calais, and in fact never would again. Meanwhile, the world was moving on to other conflicts. The end of the Hundred Years War, in turn, is often cited as a key marker for the end of the (Late) Medieval period and the transition to the (Early) Modern period. Another major event of 1453, the Fall of Constantinople, is also a touchstone for delineating the transition to the modern world. Whereas the Hundred Years War marked a shift from the fragmented control by feudal fiefdoms to ever-more centralized nation states, the Fall of Constantinople buried the remains of the Roman Empire. In doing so, it saw the flight of Byzantine scholars from the now-Ottoman Empire to the west, and particularly to Italy. This symbolically shifted the center of the presumptive heir to the Roman and Greek foundations of Western Civilization back, once again, to Rome.

The term “Renaissance” refers to the revival of classical Greek and Roman thought – particularly in art, but also in scholarship and civics. Renaissance scholarship held as a goal an educated citizenry with the skills of oration and writing sufficient to positively engage in public life. Concurrent to the strides in the areas of art and architecture, which exemplify the period, were revolutions in politics, science, and the economy. The combination of the creation of a middle class, through the availability of clerical work, and the emphasis on the value of the individual helped drive the nail into the coffin of Feudalism.

The designer of the boardgame Pax Renaissance references 1460 as the “start date” for the game, which lasts through that transitional period (roughly 70 years). Inherent in the design of the game, and expounded upon in the manual, is the idea that what drove the advances in art, science, technology, and government was the transition to a market economy. That transition shifted power away from the anointed nobility and transferred it to the “middle class.” The game’s players immerse themselves in the huge changes that took place during this time. They take sides in the three-way war of religion, with Islam, Catholicism, and the forces of the Reformation fighting for men’s souls. The game simulates the transition of Europe from feudalism to modern government; either the nation-state empires or the republic. Players also can shift the major trade routes, refocusing wealth and power from the Mediterranean to northwestern Europe.

Technically speaking, I suppose Pax Renaissance is not a boardgame, because there is no board. It is a card game, although it stretches that definition as well. Some of the game’s cards are placed on the table to form a mapboard of Europe and the Mediterranean, while others serve a more straightforward “card” function – set down from the players’ hands onto the table in front of them. The game also has tokens that are deployed to the cards and then moved from card to card (or perhaps to the space between cards). While this starts to resemble the more typical board game, if you continue to see the game in terms of the cards, the tokens can be interpreted to indicate additional states beyond the usual up-or-down (and occasionally rotated-sideways) that cards convey.

Thinking about it this way, one might imagine that it drew some of its inspiration from a card game like Rummy – at least the way I learned to play Rummy. In that variant, players may draw from the discard pile, with deeper selections into the pile having an increased cost (of holding potentially negative-point cards in your hand). Once collected, cards remain in the hand or are played in front of the player. Of course, this doesn’t really map one-to-one. A unique point in Pax Renaissance is that there are no secret (and necessarily random) draws directly to the players’ hands. Instead, new cards are dealt into the “market,” the openly-visible and available pile, usually to a position that is too expensive to be accessed immediately, giving all players visibility several turns in advance.

Thus the game has no random component, assuming that one allows that the differing deck order (and content – not every card is used in every game) could be thought of as different “boards” as opposed to a random factor. So rather than a checkers or a chess with its square grid, it is a version where there are many thousands of variations in the board shape and initial setup. Stretching the analogy to its breaking point, that variable board may also have a “fog of war,” as the playing space is slowly revealed over the course of the game.

I don’t actually mean to equate Pax Renaissance with Rummy or chess, but rather to establish some analogies that would be useful when trying to develop a programmed opponent. The game is the third in a “Pax” series from the designer, and can easily be seen as a refinement to that system. Theme-wise, it is a follow-on to his game Lords of the Renaissance from 20 years earlier. That title is a far more traditional map-and-counter game on the same subject, for 12 (!!!) players.

However, I’d like to look at this from an AI standpoint, and so I’ll use the comparison to checkers.

Since the “board” is revealed to all players equally (albeit incrementally), there is no hidden knowledge among players. Aside from strategy, what one player knows they all know. Given that factor, one supposes that victory must go to the player who can think more moves ahead than their opponents can.

I recently read an article about the development of a checkers artificial intelligence. The programmer in this tale took on checkers after his desire to build a chess intelligence was made obsolete by the Deep Blue development efforts in the early 1990s. It was suggested to him that he move to checkers, and he quickly developed a top-level player in the form of a computer algorithm. His solution was to attack the problem from both sides. He programmed the endgame, storing every possible combination of the remaining pieces and the path to victory from there. He also programmed a more traditional look-ahead algorithm, starting from a full (or nearly so) board and analyzing all the permutations forward to pick the best next move. Ultimately, his two algorithms met in the middle, creating a system that could fully comprehend every possible move in the game of checkers.

Checkers, as a target game for the development of AI, had two great advantages. First, it is a relatively simple game. While competitive, world-class play obviously has great depth, most consider a game of checkers to be fairly easy and casual. The board is small (half the spaces, functionally speaking, as chess) and the rules are very simple. There are typically only a handful of valid moves given any checkers board setup, versus dozens of valid moves in chess. Secondly, the number of players is large (who doesn’t know how to play?), and thus a great deal is known about what strategies to use, even if not quite as well-analyzed as with chess. Thus, in a checkers game an AI can begin its work by using a “book.” That is, it uses a database of all of the common and winning strategies and their corresponding counter-strategies. If a game begins by following the path of an already-known game, the programmed AI can proceed down that set of moves.

At least until one player decides it’s fruitful to deviate from that path.

After that, in the middle part of the game, a brute force search can come into play. Note that this applies to a programmed opponent only until the game is “solved”, as described in the article. Once the database has every winning solution from start to end, a search over the combinations of potential moves isn’t necessary. But when it is used, the AI searches all combinations of moves from the current position, selecting its best current-turn move based on what the (human) opponent is likely to do. At its most basic, this problem is often approached with a minimax algorithm. This is an algorithm that makes the assumption that, whatever move you (thinking of yourself as the algorithm) make, your opponent will counter with the move least advantageous to you. Therefore, to find the best move for yourself, you alternately search for the best move you can make and then the move by your opponent that is worst for you (the maximum and then the minimum ranked choices) to determine the end state for any current-turn move.
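As a rough, game-agnostic sketch, that alternating search looks something like the code below, with the standard alpha-beta pruning shortcut (the “fruitless branch” trick mentioned next) folded in. The legal_moves, apply_move, and heuristic hooks are hypothetical stand-ins for whatever a real checkers or Pax Renaissance engine would supply.

```python
# Minimax with alpha-beta pruning, written game-agnostically. The
# legal_moves, apply_move, and heuristic callables are hypothetical hooks
# that a real game engine would have to provide.
def search(state, depth, alpha, beta, maximizing, legal_moves, apply_move, heuristic):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return heuristic(state)            # value of the leaf position

    if maximizing:                         # our turn: pick the best move for us
        best = float("-inf")
        for move in moves:
            value = search(apply_move(state, move), depth - 1, alpha, beta,
                           False, legal_moves, apply_move, heuristic)
            best = max(best, value)
            alpha = max(alpha, best)
            if alpha >= beta:              # opponent already has a better option
                break                      # elsewhere, so this branch is fruitless
        return best
    else:                                  # opponent's turn: assume the worst for us
        best = float("inf")
        for move in moves:
            value = search(apply_move(state, move), depth - 1, alpha, beta,
                           True, legal_moves, apply_move, heuristic)
            best = min(best, value)
            beta = min(beta, best)
            if alpha >= beta:
                break
        return best
```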

Wikipedia has a description, with an animated example, of how such a search works using a technique (alpha-beta pruning) to avoid searching fruitless branches of the tree. That inspired me to take a look at Pax Renaissance and do a similar evaluation of the choices one has to make in that game.


A smallish example of an animated game tree.

I’m following the color coding of the Wikipedia example, although in the above screenshot it’s not as clear as it should be. First of all, not everything is working correctly. Second, I took the screenshot while it was actively animating. The coloring of the nodes is done along with the calculations and, as the tree is expanded and/or pruned, the existing branches are shifted around to try to make things legible. It looked pretty cool when it was animating. Not quite so cool to watch once I upped the number of nodes by a factor of ten or so from what is displayed in the above diagram.

I’m assuming a two-player game. The actual Pax Renaissance is for 2-4 players, but initially I wanted to be as much like the “textbook” example as I could. The coloring is red for a pruned/unused branch and yellow for an active or best branch. The cyan block is the one actively being calculated, and the blue means a block that has been “visited,” but has not yet completed evaluation. The numbers in each block are the best/worst heuristic at the leaf of each branch, which is four plies down (two computer turns and two opponent turns). Since at each layer the active player is assumed to choose the best move for them, the value in a circle should be the lowest value of any square children and the square’s should be the highest value of any circular children.

The value is computed by a heuristic, potentially presenting its own set of problems. On one hand, the heuristic is a comparison between the two players. So if the computer has more money, then the heuristic comes out positive. If the opponent has more money, then the heuristic comes out negative, with that value being the difference between the two players’ bank accounts. In that sense, it is much easier than, say, positional elements on the chess board, because each evaluation is symmetrical. The hard part is comparing the apples to the oranges. A determination is needed much like the “points” assigned to pieces in chess. Beginning chess players learn that a rook is worth 5 pawns. But how much is a “Coronation Card” worth in florins? Perfecting a search algorithm means both getting the algorithm working and implementing that “domain knowledge,” the smarts about the balance among components, within the mathematical formulas of the search.
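A toy version of such a symmetric heuristic might look like the sketch below. The florin values I assign to non-cash assets are pure placeholders; choosing them well is exactly the domain-knowledge problem described above.

```python
# Toy symmetric evaluation: positive means the computer is ahead, negative
# means the opponent is. The florin values for non-cash assets are
# placeholder guesses, not numbers from the actual game.
CARD_VALUE_IN_FLORINS = {"coronation": 8, "bishop": 3, "rook_token": 2}

def evaluate(computer, opponent):
    def score(player):
        total = player["florins"]
        for card, count in player["assets"].items():
            total += CARD_VALUE_IN_FLORINS.get(card, 0) * count
        return total
    return score(computer) - score(opponent)

print(evaluate({"florins": 10, "assets": {"coronation": 1}},
               {"florins": 14, "assets": {}}))   # 10 + 8 - 14 = 4
```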

As I said, this was an early and simple example. To build this tree, I assumed that both players are going to start the game being frugal in their spending, and therefore use their first turn to buy the cheapest two cards. As the turns advance, they look at the combinations of playing those cards and buying more. Even in this simple example, I get something like 4000 possible solutions. In a later attempt (as I said, it starts looking pretty cluttered), I added some more game options and produced a tree of 30,000 different results. Remember, this is still only two turns and, even within those two turns, it is still a subset of moves. Similar to chess and checkers, as the initial moves are complete, the number of possibilities grows as the board develops.

At this point, I need to continue building more complete trees and see how well and efficiently they can be used to determine competitive play for this game. I’ll let you know if I find anything.

Cold War Chess

On May 16th, 1956, the newly constituted Republic of Egypt under the rule of Gamal Abdel Nasser recognized the communist People’s Republic of China.

Egypt had broken from British rule in 1952 with the Free Officers Movement and their coup which ended the Egyptian monarchy. The influence of the military, and particularly Nasser, shifted toward more direct involvement in politics. Nasser and the other officers ruled through a Revolutionary Command Council and, over the next few years, eliminated political opposition. Nasser became chairman of the Revolutionary Command Council and by 1954 largely ruled Egypt himself.

In the run up to the 1952 coup, Nasser had cultivated contacts with the CIA. His purpose was to provide a counterbalance to the British, should they attempt to oppose the Free Officers in their takeover. The U.S. came to see Nasser as an improvement over the deposed King Farouk and looked to him for support in the fight against communism. Nasser himself promoted pan-Arab nationalism, which concerned itself largely with the perceived threat from the newly-formed State of Israel. Nasser also became a leader of the newly-independent third world countries, helping create the policy of “neutralism,” having the rising powers of the third world remain unaligned in the Cold War.

It was within this context that the recognition of China appeared to be so provocative.

Egypt had begun drifting towards the communist camp due to a frustration with the terms of arms sales and military support from the Western powers. A major weapons deal with the USSR to purchase Czechoslovakian weapons in 1955 greatly enhanced Egypt’s profile in the region, and put them on an even military footing with Israel.

When Nasser recognized China, the response from the U.S. was a counterpunch: withdrawing financial support for the Aswan Dam project, itself conceived as a mechanism for securing Egypt’s support on the anti-communist side of the Cold War. U.S. officials considered it a win-win. Either they would bend Nasser to their will, and achieve better compliance in the future, or he would be forced to go to the Soviets to complete the Aswan Dam. They figured that such a project was beyond the financial capabilities of the Russians, and the strain would hamper the Soviet economic and military capabilities enough to more than make up for the deteriorated relations with Egypt. In that event, the ultimate failure of the project would likely realign Egypt with the U.S. anyway.

Egypt’s response continued to surprise. Despite having negotiated that the UK turn over control of the Suez Canal to Egypt, on July 26th, 1956, Nasser announced the nationalization of the Suez Canal and used the military to expel the British and seize control over its operation.

Known Bugs – Arab Israeli Wars

Arab Israeli Wars

Version 0.1.3.0. (April 22nd 2017)

  1. If you move a vehicle, and still have additional transfers left, but don’t want to use them, the system may wait forever for you to make another transfer. This doesn’t always happen, but if it does, moving a unit around within the same front will usually result in the prompt.
  2. If multiple informational displays are shown simultaneously and overlapped, the hide button may show through the upper card from the lower card.
  3. If the card Flanking Maneuvers (+2 to all Israeli units in Target front) is applied to Armored Cars, the unit icon only shows a force value of 4*. It should display 5*. The correct value is used in the calculations.

Ain’t she a beautiful sight?

There was armored cars, and tanks, and jeeps,
and rigs of every size.

Twenty-eight years after the Jerusalem riots came the beginning of Operation Nachshon. The Operation was named for the Biblical prince Nachshon, who himself received the name (meaning daring, but it also sounds similar to the word for “stormy sea waves”) during the Israelites’ exodus from Egypt. According to one text, when the Israelites first reached the Red Sea, the waters did not part before them. As the people argued on the sea’s banks about who would lead them forward, Nachshon entered the waters. Once he was up to his nose in the water, the sea parted.

Operation Nachshon was conceived to open a path between Tel Aviv and Jerusalem to deliver supplies and ammunition to a besieged Jerusalem, cut off from the coast as the British withdrew from Palestine. The road to Jerusalem led through land surrounded by Arab-controlled villages, from which Palestinian militia (under the command of Abd al-Qadir al-Husayni) could ambush Israeli convoys attempting to traverse the route.

The operation started on April 5th with attacks on Arab positions and, in the pre-dawn hours of April 6th, a convoy arrived in Jerusalem from Tel-Aviv. During the operation, the Israelis successfully captured or reduced more than a dozen villages, and took control of the route. Several more convoys made it into Jerusalem before the end of the operation on April 20th.

Operation Nachshon was also the first time Jewish forces attempted to take and hold territory, as opposed to just conducting raids.

Today also marks a first for A Plague of Frogs. We are delivering, for free download, a PC game depicting the Arab Israeli War of 1948. Click for rules, download link, and other details.


They Give Me Five Years. Five Years

I hope you do what you said when you swore you’d make it better.

A great irony is that when a people finally throws off the tyranny of a ruling empire, they so often find that it was their imperial masters that had been keeping them from killing each other.

By the time the Ottoman Empire was broken apart, it had long been seen as a system in decline. After their defeat in the Battle of Vienna in 1683, the Empire no longer threatened Europe with its expansion. After the loss of the Russo-Turkish War in 1774, the European powers saw the ultimate breakup of the Ottoman Empire as an inevitability, and began jockeying for control over the eventual spoils. In the mid-1800s, the term The Sick Man of Europe was coined to describe the Ottoman Empire. Compared to its counterparts in the West, it had lower wealth and a lower quality of life. Non-Muslims were accorded a second-class citizenship status but, even within this system, non-Muslims and particularly Christians were better educated and thus opened up an economic gap relative to the Muslim majority.

As the Empire continued to decline, nationalist independence movements caused internal stress. Where armed conflict ensued, one might wonder whether my thesis applies. In the Levant, however, despite a multi-cultural population as well as a rising sense of Arab nationalism independent from Turkey, there was relative peace. Movements for more autonomy tended to focus their efforts in the political arena rather than through violence. This was the period when the Zionist movement was taking form, but it too expressed itself mostly within the confines of civil government.

The final nail in the Ottoman coffin came from backing the Germans in the First World War. In the Middle East, the British had since 1882 occupied Egypt despite it technically remaining a province of the Ottoman Empire. Egypt became a focus of the British war effort early on, both as a base of operations for the Gallipoli campaign and as a means to protect the Suez Canal. Eventually, the British took the offensive in the Sinai and then Gaza, as a way to provide additional pressure on the Ottomans.

In 1917, the British army captured, from the Turks, Jerusalem and the lands that were to become the modern state of Israel. At the end of the war, occupation of the Levant portion of the Middle East was formalized in the post-war settlement. The rule of London replaced the rule of Constantinople.

While the Arab portions of the Ottoman empire were not immune to nationalistic movements, pre-WWI Arabs under the Turks tended to see themselves as part of a Muslim nation. The advent of WWI and centralization of power in Constantinople, following a January 1913 Ottoman coup d’état, resulted in the Sharif and Emir of Mecca declaring an Arab Revolt in June of 1916. It bears considering that this revolt came after the British were at war with the Ottoman Empire. While many reasons were given for the Revolt, including Arab Nationalism and a lack of Muslim piety on the part of the Committee of Union and Progress (the party of the Young Turks and the Three Pashas installed by the aforementioned coup), Hussein ibn Ali al-Hashimi had made agreements with the British in response to their request for assistance in fighting the Central Powers.

Such understandings contributed to Arab unrest post-WWI, as pre-war promises of Arab Independence differed from the disposition of captured Ottoman territory after the war. It didn’t help with Arab sentiment that Britain, now in possession and control of Palestine, had issued the Balfour Declaration in 1917, which supported the concept of a Jewish Homeland in Palestine. While modern Zionism had been an issue for decades, under Ottoman rule it was largely relegated to the political sphere. With the end of the supremacy of a Muslim power in Palestine, Arabs likely felt a more direct protest was necessary to assert their position in Palestine. Arab nationalism was also reinforced by anti-French sentiment in Syria, brought to a head by the March 7, 1920 declaration of Faisal I (son of Hussein bin Ali and a General in the Arab Revolt of 1916) as King.

Events of early 1920, and a lack of response from the ruling British Authorities, caused Jewish leaders to look to their own defense. By the end of March militia groups had trained something like 600 paramilitaries and had begun stockpiling weapons.

Jerusalem Riots

Sunday morning, April 4th, 1920, found Jerusalem in a precarious state. Jewish visitors were in the city for the Passover celebration. Christians were there for Easter Sunday. Additionally, the Muslim festival of Nebi Musa had begun on Good Friday, to last for seven days. In excess of 60,000 Arabs were in the streets for the festival, and by mid-morning there was anti-Jewish violence occurring sporadically throughout the Old City. Arab luminaries delivered speeches to the masses, wherein they advocated for Palestinian independence and the expulsion, by violence if need be, of the Zionists among them. By mid-day, the violence had turned to riots, with homes, businesses, and temples being vandalized and as many as 160 Jews injured.

The British military declared first a curfew and then martial law, but the riots continued for four days. Ze’ev Jabotinsky, a co-founder of the Jewish Legion, along with 200 volunteers tried to work with the British to provide for the defense of the Jewish population. The British ultimately prevented such assistance and, in fact, arrested 19 Jews, including Jabotinsky, for the possession of arms. Jabotinsky was sentenced to 15 years in prison, although his sentence was eventually reduced, along with those of all the others (Jews and Arabs) convicted as a result of the riots. The total number put on trial was approximately 200, with 39 of them being Jews.

By the time peace was restored to Jerusalem, five Jews and four Arabs were dead. Over 200 Jews were injured, eighteen of them critically, and 300 Jews were evacuated from the Old City. Some 23 Arabs were also injured, one critically.

The aftermath of the riots left the British occupiers on everyone’s wrong side.

Among the Arabs, the feeling was that they had been wronged by the lack of independence after being separated from the Ottoman Empire. Furthermore, in the Balfour Declaration they saw a sign that the British would ultimately replace their own rule with a Jewish one. The riots also marked the beginning of a distinctly Palestinian nationalism, separate from Pan-Arabism or the Syrian independence movements.

On the other hand, the Jews suspected British complicity as a cause of the riots in the first place. In addition to some unproven conspiracies, the British made several missteps which allowed the riots to escalate. For example, Arabs arrested during Sunday night’s curfew were released on Monday morning, only for the riots to continue through Wednesday. The British halted Jewish immigration to Palestine, punishing the Jews for Arab aggression. The inadequacy of Britain’s defense of the Jewish population led directly to an organized Jewish defense force called the Haganah (“defense”), which would later become the core of the Israeli military.

The incident surely tipped off the United Kingdom that she had entered into a situation from which there was no easy way out. Nevertheless, for the next few decades she persevered in bringing enlightened British rule to a difficult region.

It would take more than 19 years before the British partially walked back the Balfour Declaration by halting Jewish immigration to Palestine. It would be almost 27 years, in February of 1947, before the British government decided to terminate the Palestinian Mandate and hand the issue over to the United Nations.

 

United We Fruit

Makin’ up a mess of fun,
makin’ up a mess of fun
Lots of fun for everyone
Tra la la, la la la la
Tra la la, la la la la

On March 15th, 1951, Colonel Jacobo Árbenz Guzmán was inaugurated as President of Guatemala. Alas, his new socialist policies, notably a land reform that expropriated uncultivated holdings of the United Fruit Company, earned him the company’s ire.

While the CIA had participated in regime change before, and while the U.S. had previously meddled in the Caribbean, this was the first Cold War operation of its kind in America’s own near abroad. It was the start of decades of Cold War confrontation barely a stone’s throw from American soil. It would continue through, and perhaps culminate in, the early 1980s, with the region embroiled in long-term conflict and with ramifications, like the Iran-Contra affair, that seriously shook up the U.S. government.

Fortunately for the world, much of this is quickly becoming ancient history. A 1987 peace agreement began to move the region back towards normalcy, and what problems still exist are nowhere near the level of 3-4 decades ago. If we nonetheless want to relive those wild and crazy times, we might do so through a game called “Latin Intervention.”

Fun for Everyone

Latin Intervention is a one-page, print-and-play game freely available from Board Game Geek (and elsewhere). As you might expect given that introduction, it is very simple. Players assume the roles of the two superpowers and take turns placing pieces on the board. The combination of placed markers and a die roll determines the political alignment of the nations of Central America. A player wins by controlling five out of the seven Central American countries. One catch is that unit placement drives up a “Threat Meter,” representing world tensions. This restricts each player’s actions, as driving the Threat Meter over the top results in losing the game.
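
To make that framing concrete, here is a bare-bones sketch in Python. The five-of-seven win condition and the “over the top” loss come straight from the rules as I read them; the eight-step threat limit comes from the consensus reading of the threat track discussed below, and everything else is placeholder scaffolding rather than official components.

```python
# Bare-bones sketch of the game's framing as described above: seven
# countries, a five-country win condition, and a shared Threat Meter
# that loses you the game if pushed over the top. The eight-step limit
# is the consensus reading of the threat track discussed later.

COUNTRIES = {"Guatemala", "Belize", "El Salvador", "Honduras",
             "Nicaragua", "Costa Rica", "Panama"}
WIN_COUNT = 5
THREAT_LIMIT = 8

def has_won(controlled: set) -> bool:
    """A player wins by controlling five of the seven countries."""
    return len(controlled & COUNTRIES) >= WIN_COUNT

def has_lost(threat_level: int) -> bool:
    """Driving the Threat Meter over the top loses the game."""
    return threat_level > THREAT_LIMIT

print(has_won({"Panama", "Costa Rica", "Nicaragua", "Honduras", "Guatemala"}))  # True
print(has_lost(9))  # True
```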

Despite the simplicity of the mechanics, the game has real appeal due to its “color.” Pieces are labeled to represent the various means that the superpowers used to meddle in the affairs of third-world countries: secret agents, monetary aid, revolutionaries, and the military. All of this was done through proxies, so as not to push the world over the edge into a direct conflict between the U.S. and the U.S.S.R. Another Board Game Geek user has redone the game art, making the print-and-play product look, actually, quite fetching.

Sadly, though, playing through it reveals that the game is not quite right.

Critiques and Criticisms

There isn’t a whole lot out there on the internet written about this game. It’s simple, it’s free, and it’s probably not for everyone. For those who have expressed an opinion, there are positive comments, but also a few consistent criticisms. Please take a look at the game and the rules if you want to follow along with my narrative.

First off, there is a lot of confusion about the Threat Meter track. In the original game design, the threat track has both green and red steps, and it isn’t entirely clear from the rules how they interact. The consensus is that they are together a single set of steps and each red step is merely two green steps. The new board has eliminated the “red” altogether, and simply has eight green steps, with counters being worth either one or two steps. One does wonder, given some of the other issues, whether we are missing something here, but I don’t see any other way to interpret this. For example, if the tracks were actually in parallel (that is, the red and green steps were separate), the “red” markers would be effectively free, as there are only four of them in the game. That wouldn’t make sense at all.

One realization that I made quickly is that, with the eight-step threat track, players must try to put the maximum threat onto the board as quickly as possible. In fact, the order seems pretty much prescribed. The Soviets play 1. Missile Base, 2. Revolutionaries, and 3. KGB agent. The U.S. must play 1. Carrier Group and 2. CIA agent. At that point, the threat level is at maximum, and no more units can be placed. From this point on, neither player can reduce the threat level, because the other side would immediately use that to place another piece. So the game must be played out with six pieces on the board (the U.S. has the Panama Canal to start). This seems like a poor use of the game, as it ignores the bulk of the available pieces.

The other area of agreement is that the Missile Base and Carrier Group, the two +5 units in the game, are overpowered. Because control is gained on a roll of “6 or higher,” these two units are essentially instant wins. I haven’t analyzed it too carefully, but I’d think the game would probably see the Russians keeping their Agent and Revolutionaries together (for a +5) while moving the Missile Base to capture territory. The U.S. could challenge neither (as both sides would have automatic sixes), and would always have to move to protect his other two pieces if the Russians went after them. Like the Soviets, he probably has to group the two in Panama to prevent being taken out. Maybe I am missing something, but the win would have to come from taking a risk that you could neutralize your opponent’s +5 piece with a lesser piece by a couple of lucky rolls in a row, allowing you to pick up territory.
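
To see just how overpowered the +5 pieces are, here is a quick back-of-the-envelope check. I am assuming the control check is a single d6 plus the net modifiers in a country, succeeding on a total of 6 or higher, which is how I read the “6 or higher” rule; the exact way modifiers stack is my assumption, not the published text.

```python
# Probability of winning a control check, assuming: roll one d6, add the
# net modifier in the country, and succeed on a total of 6 or higher.
from fractions import Fraction

def p_control(net_modifier: int) -> Fraction:
    """Chance that d6 + net_modifier >= 6."""
    favorable = sum(1 for roll in range(1, 7) if roll + net_modifier >= 6)
    return Fraction(favorable, 6)

for mod in range(6):
    print(f"+{mod}: {p_control(mod)}")
# +0: 1/6, +1: 1/3, +2: 1/2, +3: 2/3, +4: 5/6, +5: 1 -- an automatic win
```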

Strangely, with all of that, there are several players who talk about what a great game it is to play.

If you’re looking at the Board Game Geek site, there is a video review of the game by YouTuber marcowargamer. He also identified those two major problem areas within the rules. He explains one workaround that seems to be in use, which is to have separate threat tracks for the two players. Thus, you can attempt to, judiciously, lower your threat meter, giving up initiative in the current turn for more power during a subsequent turn. He also proposes a rule making the +5 markers single-use, to prevent them from completely overpowering the game.

He doesn’t mention it specifically, but he has also made a change where challenged countries are re-rolled every turn, not just after the placement of a new marker.

In the video review, he doesn’t indicate whether his modified rules have been play-tested and found to be balanced. He is more interested in the game as a launch point for discussions about history. The changes, and particularly the separate Threat Meters, open up a number of different strategies. However, it seems to me that the common threat track is key to the historical perspective of the game. Like the similar mechanic in Twilight Struggle, it captures the feel of the Cold War arms race. You may not want to escalate yourself, but you can’t let those Russkies develop a missile gap.

I’ve come up with my own variant that addresses the balance issues while preserving the single threat track. Admittedly, it may just complicate the rules while achieving the same results. On the other hand, I think these rules fit better with the historical “color,” which may justify the complexity. I’ve posted my rules, so you can see for yourself.

The Rules

They are summarized on this page. I will note that I will make changes at the link if I discover problems, so at some point the rules are likely to get out of sync with my commentary.

There are two major changes. I address the Threat Meter issue by making deployments to already-controlled countries “free” in terms of threat. Sending aid to an anti-government faction may be seen as threatening on the international level, but sending aid to a friendly government probably wouldn’t be. This essentially accomplishes the same thing as the separate tracks – a player can either play an existing piece now, or gain a new piece for play in the future.

For the +5 units, I assign a threat penalty for leaving them on the board. A Cuban Missile Base or a Carrier Group hovering off of Nicaragua would be seen as a continuing threat. Thus, you can deploy your (for example) carrier for “free,” but you only have so many turns to use it before you have to pull it off the map. Furthermore, in doing so you probably lower the threat level, opening up opportunities for your opponent. It is likely that this not only makes these units the equivalent of “one time” plays, but also demands that they be used at the beginning of the scenario, when the threat level can accommodate it.

I made the choice to allocate the threat points from the +5 units at the end of the turn, rather than during placement. This means that if you are going first in the turn, using a +5 counter might be an instant loss if your opponent can drive the threat meter up to the last position. It further weakens the play of the most powerful pieces.
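
Putting those two changes together, here is a minimal sketch of how the threat accounting works in my variant. The eight-step track is from the base game, and the idea that a +5 unit pays its threat at the end of each turn it stays on the map is the variant rule above; the specific cost of one step per turn, and the one-step cost for an ordinary placement, are illustrative values of my own, not numbers from the posted rules.

```python
# Sketch of the variant's threat bookkeeping. Step values are illustrative.
THREAT_LIMIT = 8

def placement_threat(is_plus5: bool, target_already_controlled: bool) -> int:
    """Threat cost of placing a piece under the variant rules."""
    if target_already_controlled:
        return 0   # aid to a friendly government isn't an escalation
    if is_plus5:
        return 0   # carriers/missile bases pay their threat later, turn by turn
    return 1       # assumed cost for an ordinary placement

def end_of_turn_threat(plus5_units_on_map: int) -> int:
    """Each +5 unit still on the map is a continuing threat."""
    return plus5_units_on_map   # assumed: one step per unit per turn

threat = 0
threat += placement_threat(is_plus5=True, target_already_controlled=False)  # deploy carrier: free
threat += end_of_turn_threat(1)  # ...but it costs a step at the end of every turn it lingers
print(f"threat {threat}/{THREAT_LIMIT}")
```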

The second major change I made was to restrict on-board movement. Movement for some pieces is restricted to adjacent countries. The “Aid” markers, otherwise the weakest of units, can be moved without restriction. This further shakes up the balance, as well as creating some strategic value for the map. The map is no longer just seven bins into which you can place pieces. The layout of the countries matters, and it creates incentives to hold some countries over others. It also makes some “real life” sense. It’s easy enough to send suitcases full of money anywhere in the world. But actually moving a couple of brigades of revolutionary armies might require controlling the ground you have to pass through.
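
As an illustration of how this turns the map into more than seven bins, here is a small legality check for movement. The adjacency list is based on the real geography of Central America; the printed game board may group or connect the countries differently, so treat it as a sketch rather than the official map.

```python
# Sketch of the variant's movement rule: Aid moves anywhere, everything
# else may only move into an adjacent country. Adjacency here follows
# real-world borders and may not match the game board exactly.
ADJACENT = {
    "Guatemala":   {"Belize", "Honduras", "El Salvador"},
    "Belize":      {"Guatemala"},
    "El Salvador": {"Guatemala", "Honduras"},
    "Honduras":    {"Guatemala", "El Salvador", "Nicaragua"},
    "Nicaragua":   {"Honduras", "Costa Rica"},
    "Costa Rica":  {"Nicaragua", "Panama"},
    "Panama":      {"Costa Rica"},
}

def move_is_legal(piece: str, src: str, dst: str) -> bool:
    """Aid markers move without restriction; other pieces need adjacency."""
    if piece == "Aid":
        return True
    return dst in ADJACENT[src]

print(move_is_legal("Revolutionaries", "Nicaragua", "Guatemala"))  # False: not adjacent
print(move_is_legal("Aid", "Panama", "Guatemala"))                 # True: money travels easily
```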

One other change I made was to vary the player order. This helps create some back and forth in that, once you start winning the game, you are disadvantaged by having to take the first turn. It may also throw the game out of balance. The Soviets have better pieces, and this might make it so they can’t lose. To balance this out, I’ve tweaked the restrictions on the Aircraft Carrier, allowing it to be placed directly into a contested country in response to the Soviets’ use of missiles. In terms of that color, projecting military power must be a lot easier for the Americans, who are a) so close to begin with and b) have the naval assets available. But in terms of game play, it gives the U.S. player a counter-strategy to the Soviets’ ability to grab an early lead. With the finite number of threat steps, it may be that this move remains merely a possibility. It all could use some play-testing to see if things are balanced.

So there it is. I’ve not done much checking for balance, and anyone who gets a chance to do so before I do is welcome to share their comments.

They call it The Dance

So you think you know what’s going on inside her head

On June 24th, 1374, the largest outbreak of choreomania began in Aachen, Germany. It subsequently spread to other cities in Germany, the Low Countries, and Italy.

This phenomenon has been called, variously, Dancing Mania, Dancing Plague, and St. Vitus’ Dance. At the time, the cause was attributed to a curse sent by St. John the Baptist or St. Vitus, due to correlations between the outbreaks and the June feast days of those saints. Much later, the evolution of medical science diagnosed St. Vitus’ Dance as Sydenham’s chorea, an involuntary jerking of the hands, feet and face.

The mass phenomenon of the Middle Ages, however, is more often considered a social affliction rather than a medical one. The outbreaks are described as affecting up to tens of thousands of people at a time, making contagion or similar causes (such as spider bites) an improbable explanation.

The Aachen outbreak and other large outbreaks of the Dancing Plague occurred during times of economic hardship. This has suggested one possible medical cause: ergotism, the hallucinogenic effect of a grain fungus that can spread with flooding and damp periods.

The affliction was said to be deadly, with the only cure being the playing of the right music.

Similarly, I have been trying to soothe the violent convulsions in this morning’s financial markets by playing selected songs from less troubled times. Feel free to join me.

Mayday, Mayday, Mayday!

If there’s a bustle in your hedgerow, don’t be alarmed now.

In 1927, the term Mayday was adopted as a spoken equivalent of the Morse Code SOS signal. The term itself is an Anglicization of the French phrase m’aider (to aid me or to help me), itself a shortened version of the phrase venez m’aider (come to help me).

Also in 1927, the First of May was proposed as a celebration of the native culture of the Hawaiian Islands. It is known as May Day or Lei Day. The holiday is intended to be non-political, non-partisan, and non-religious.

This is in contrast to the significance of the date in much of Europe. International Workers’ Day was established as a commemoration of the Haymarket Riot. A labor strike was called on May 1st, 1886, in Chicago, IL, to agitate for the establishment of an eight-hour work day. The strike turned violent on May 3rd, with the police firing on striking workers who were attacking replacement workers at the site of a lock-out. Between two and six workers were reportedly killed.

A flyer was printed by an anarchist group, calling the striking workers to a mass meeting as well as calling them “to arms.” The meeting, on the night of May 4th, lasted for several hours before the police moved in and ordered the crowd to disperse. As the police approached the crowd, an unknown person threw a bomb into the path of the advancing police, killing one officer instantly and mortally wounding six others. A firefight ensued. At least four workers were killed, and some sixty officers as well as fifty or more strikers were wounded. Public opinion turned against the labor movement, and ultimately a number of anarchists were executed on charges relating to the incident. The unions, however, suspected that infiltrators were responsible for the bombing, intended to discredit the movement.

In 1890, the First of May was declared to be International Workers’ Day in an effort to unite Socialists, call attention to the eight-hour work day movement, and memorialize the (labor) victims of the Haymarket incident. Riots occurred in Cleveland in 1894 and 1919. It was not until 1978 that May Day (observed on the first Monday in May) became a labour holiday in the United Kingdom. In 2000, May Day riots resulted in (among other incidents) the destruction of a McDonald’s restaurant on The Strand in London.

This has created a modern nexus with the traditional Anglo-Saxon holiday celebrating the coming of spring and fertility. Modern celebrants connect the socialist roots, where May Day equates to Labor Day, with the pagan/earth/new-age-y roots of the old fertility festivals.

Here at A Plague of Frogs Studios, we have the day off because it is Sunday. No political, partisan or religious connotations intended.