Related

In the fall of 1915, after ten years of analysis, Albert Einstein presented his gravitational field equations of general relativity in a series of lectures at the Royal Prussian Academy of Sciences. The final lecture was delivered on November 25th, 104 years ago.

Yet it wasn’t until a month or so ago that I got a bug up my butt about general relativity. I was focused on some of the paradox-like results of the special theory of relativity and was given to understand, without actually understanding, that the general theory of relativity would solve them. Not to dwell in detail on my own psychological shortcomings, but I was starting to obsess about the matter a bit.

Merciful it was that I came across The Perfect Theory: A Century Of Geniuses And The Battle Over General Relativity when I did. In its prologue, author Pedro G. Ferreira explains how he himself (and others he knows in the field) can get bitten by the Einstein bug and how one can feel compelled to spend the remainder of one’s life investigating and exploring general relativity. His book explains the allure and the promise of ongoing research into the fundamental nature of the universe.

The Perfect Theory tells its story through the personalities who formulated, defended, and/or opposed the various theories, starting with Einstein’s work on general relativity. Einstein’s conception of special relativity came, for the most part, while sitting at his desk during his day job and performing thought experiments. He was dismissive of mathematics, colorfully explaining “[O]nce you start calculating you shit yourself up before you know it” and more eloquently dubbing the math “superfluous erudition.” His special relativity was incomplete in that it excluded the effects of gravity and acceleration. Groundbreaking though his formulation of special relativity was, he felt there had to be more to it. Further thought experiments told him that gravity and acceleration were related (perhaps even identical) but his intuition failed to close the gap between what he felt had to be true and what worked. The solution came from representing space and time as a non-Euclidean continuum, a very complex mathematical proposition. The equations are a thing of beauty but also are beyond the mathematical capabilities of most of us. They have also been incredibly capable of predicting physical phenomena that even Albert Einstein himself didn’t think were possible.

From Einstein, the book walks us through the ensuing century looking at the greatest minds who worked with the implications of Einstein’s field equations. The Perfect Theory reads much like a techno-thriller as it sets up and then resolves conflicts within the scientific world. The science and math themselves obviously play a role, and Ferreira has a gift for explaining concepts at an elementary level without trivializing them.

Stephen Hawking famously was told that every formula he included in A Brief History of Time would cut his sales in half. Hawking compromised by including only Einstein’s most famous formula, E = mc². Ferreira does Hawking one better, including only the notation, not the full formula, of the Einstein tensor in an elaboration on Richard Feynman’s story about efforts to find a cab to a relativity conference as told in Surely You’re Joking, Mr. Feynman. The left side of that equation can be written as Gμν. This is included, not in an attempt to use the mathematics to explain the theory, but to illustrate Feynman’s punch line. Feynman described fellow relativity-conference goers as people “with heads in the air, not reacting to the environment, and mumbling things like gee-mu-nu gee-mu-nu”. Thus, the world of relativity enthusiasts is succinctly summarized.

The most tantalizing tidbit in The Perfect Theory is offered up in the prologue and then returned to at the end. Ferreira predicts that this century will be the century of general relativity, in the same way the last century was dominated by quantum theory. It is his belief we are on the verge of major new discoveries about the nature of gravity and that some of these discoveries will fundamentally change how we look at and interact with the universe. Some additional enthusiasm shines through in his epilogue where he notes the process of identifying and debunking a measurement of gravitational waves that occurred around the time the book was published.

By the end of the book, his exposition begins to lean toward the personal. Ferreira has an academic interest in modified theories of gravity, a focus that is outside the mainstream. He references, as he has elsewhere in the book, the systematic hostility toward unpopular theories and unpopular researchers. In some cases, this resistance means a non-mainstream researcher will be unable to get published or unable to get funding. In the case of modified gravity, he hints that this niche field potentially threatens the livelihood of physicists who have built their careers on Einstein’s theory of gravity. In fact, it wasn’t so long ago that certain aspects of Einstein’s theory were themselves shunned by academia. As a case in point, the term “Big Bang” was actually coined as a pejorative for an idea that, while mathematically sound, was too absurd to be taken as serious science. Today, we recognize it as a factual and scientific description of the origin of our universe. Ferreira shows us a disturbing facet of the machinery that determines what we, as a society and a culture, understand as fundamental truth. I’m quite sure this bias isn’t restricted to his field. In fact, my guess would be that other, more openly politicized fields exhibit this trend to an even greater degree.

Ferreira’s optimism is infectious. In my personal opinion, if there is to be an explosion of science it may come from a different direction than what Ferreira implies. One of his anecdotes involves the decision of the United States to defund the Laser Interferometer Space Antenna (LISA), a multi-billion dollar project to use a trio of satellites to measure gravitational waves. To the LISA advocates, we could be buying a “gravitational telescope,” as revolutionary, in terms of current technologies, as radio telescopes were to optical telescopes. The ability to see further away and farther back in time would then produce new insights into the origins of the universe. But will the taxpayer spend billions on such a thing? Should he?

Rather than in the abstract, I’d say the key to the impending relativity revolution is found in Ferreira’s own description of the quantum revolution of the past century. It was the engineering applications of quantum theory, primarily to the development of atomic weapons, that brought it much of its initial interest and funding. By the end of the century, new and practical applications for quantum technology were well within our grasp. My belief is that a true, um, quantum leap forward in general relativity will come from the promise of practical benefit rather than fundamental research.

In one of the last chapters, Ferreira mentions that he has two textbooks on relativity in his office. In part, he is making a point about a changing-of-the-guard in both relativity science and scientists, but I assume he also keeps them because they are informative. I’ve ordered one and perhaps I can return to my philosophical meanderings once I’m capable of doing some simple math. Before I found The Perfect Theory, I had been searching online for a layman’s tutorial on relativity. Among my various meanderings, I stumbled across a simple assertion, one that seems plausible although I don’t know if it really has any merit. The statement was something to the effect that there is no “gravitational force.” An object whose velocity vector is bent (accelerated) by gravitational effects is, in fact, simply traveling a straight line (a geodesic) within the curvature of spacetime. If I could smarten myself up to the point where I could determine the legitimacy of such a statement, I think I could call that an accomplishment.

The not-so-Friendly Skies

This past weekend, the Wall St. Journal published a front-page article detailing the investigation into the recent, deadly crashes of Boeing 737 MAX aircraft. It is a pretty extensive combination of information that I had seen before, new insights, and interviews with insiders. Cutting to the chase, they placed a large chunk of the blame on a regulatory structure that puts too much weight on the shoulders of the pilots. They showed, with a timeline, how many conflicting alarms the pilots received within a four-second period. If the pilots could have figured out the problem in those four seconds and taken the prescribed action, they could have saved the plane. The fact that the pilots had a procedure that they should have followed means the system fits within the safety guidelines for aircraft systems design.

Reading the article, I couldn’t help but think of another article that I read a few months back. I was directed to the older article by a friend, a software professional, on social media. His link was to an IEEE article that is now locked behind their members-only portal. The IEEE article, however, was a version of a blog post by the author and that original post remains available on Medium.

This detailed analysis is even longer than the newspaper version, but also very informative. Like the Wall St. Journal, the blog post traces the history behind the design of the hardware and software systems that went into the MAX’s upgrade. Informed speculation describes how the systems of those aircraft caused the crashes and, furthermore, how those systems came to be in the first place. As long as it is, I found it well worth the time to read in its entirety.

On my friend’s social media share, there was a comment to the effect that software developers should understand the underlying systems for which they are writing software. My immediate reaction was a “no,” and it’s that reaction I want to talk about here. I’ll also point out that Mr. Travis, the blog-post author and a programmer, is not blaming programmers or even programming methodology per se. His criticism is at the highest level: of the corporate culture and processes and of the regulatory environment that governs these corporations. In this I generally agree with him, although I could probably nitpick some of his points. But first, the question of the software developer and what they should, can, and sometimes don’t understand.

There was a time, in my own career and (I would assume) in the career of the author, that statements about the requisite knowledge of programmers made sense. It was probably even industry practice to ensure that developers of control system software understood the controls engineering aspects of what they were supposed to be doing. Avionics software was probably an exception, rather than the rule, in that the industry was an early adopter of formal processes. For much of the software-elements-of-engineering-systems industry, programmers came from a wide mix of backgrounds and a key component of that background was what programmers might call “domain experience.” Fortran could be taught in a classroom but ten years’ worth of industry experience had to come the hard way.

Since we’ve been reminiscing about the artificial intelligence industry of the 80s and 90s, I’ll go back there again. I’ve discussed the neural network state-of-the-art, such as it was, of that time. Neural networks were intended to allow the machines to extract information about the system which the programmers didn’t have. Another solution to the same category of problems, again one that seemed to hold promise, was expert systems, which aimed to make direct use of those experts who did have the knowledge that the programmers lacked. Typically, expert systems were programs built around a data set of “rules.” The rules contained descriptions of the action of the software relative to the physical system in a way that would be intuitive to a non-programmer. The goal was a division of labor. Software developers, experts in the programming, would develop a system to collect, synthesize, and execute the rules in an optimized way. Engineers or scientists, experts in the system being controlled, would create those rules without having to worry about the software engineering.
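As a rough illustration of that division of labor (a hypothetical toy in Python, not any particular product’s rule engine), the rules can live as data that a domain expert could read and edit, while the software team owns the small engine that evaluates them:

```python
# Hypothetical sketch of the expert-system division of labor: domain experts
# author declarative rules; the software team owns the engine that applies them.
# The sensor names and thresholds here are invented for illustration.

# Rules as data: (conditions, action) pairs an engineer could read and edit.
RULES = [
    ({"temperature": lambda t: t > 350}, "open_cooling_valve"),
    ({"pressure": lambda p: p > 120}, "sound_alarm"),
    ({"temperature": lambda t: t < 200, "pressure": lambda p: p < 80}, "idle"),
]

def evaluate(rules, readings):
    """Fire every rule whose conditions all hold for the current readings."""
    actions = []
    for conditions, action in rules:
        if all(check(readings[name]) for name, check in conditions.items()):
            actions.append(action)
    return actions

print(evaluate(RULES, {"temperature": 375, "pressure": 95}))  # ['open_cooling_valve']
```

The domain expert tunes the 350-degree threshold; the evaluation loop never has to change.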

Was this a good idea? In retrospect, maybe not. While neural networks have found a new niche in today’s software world, expert systems remain an odd artifact found on the fringes of software design. So if it isn’t a good idea, why not? One question I remember being asked way-back-when got to the why. Is there anything you can do with a rule-based system that you couldn’t also implement with standard software techniques? To put it another way, is my implemented expert system engine capable of doing anything that I couldn’t have my C++ team code up? The answer was, and I think obviously, “no.” Follow that with, maybe, a justification about improved efficiencies in terms of development that might come from the expert systems approach.

Why take this particular trip down memory lane? Boeing’s system is not what we’d classify as AI. However, I want to focus on a particular software flaw implicated as a proximate cause of the crashes: the one that uses the pitch (angle-of-attack) sensors to avert stalls. Aboard the Boeing MAX, this is the “Maneuvering Characteristics Augmentation System” (MCAS). It is intended to enhance the pilot’s operation of the plane by automatically correcting for, and thereby eliminating, rare and non-intuitive flight conditions. Explaining the purpose of the system with more pedestrian terminology, Mr. Travis’ blog calls it the “cheap way to prevent a stall when the pilots punch it” system. It was made a part of the Boeing MAX as a way to keep the airplane’s operation the same as it had always been, using feedback about the angle-of-attack to avoid a condition that could occur only on a Boeing MAX.

On a large aircraft, the pitch sensors are redundant. There is one on each side of the plane and both the pilot and the co-pilot have indicators for their side’s sensor. Thus, if the pilot’s sensor fails and he sees a faulty reading, his co-pilot will still be seeing a good reading and can offer a different explanation for what he is seeing. As implemented, the software is part of the pilot’s control loop. MCAS is quickly, silently, and automatically doing what the pilot would be doing, were he to have noticed that the nose of the plane was rising toward a stall condition at high altitude. What it misses is the human interaction between the pilot and his co-pilot that might occur if the nose-up condition were falsely indicated by a faulty sensor. The pilot might say, “I see the nose rising too high. I’m pushing the nose down, but it doesn’t seem to be responding correctly.” At this point, the co-pilot might respond, “I don’t see that. My angle-of-attack reports normal.” This should lead them to decide that the pilot should not, in fact, be responding to the warning produced by his own sensor.

Now, according to the Wall St. Journal article, Boeing wasn’t so blind as to simply ignore the possibility of a sensor failure. This wasn’t explained in the Medium article, but there are (and were) other systems that should have alerted a stricken flight crew to an incompatible difference in values between the two angle-of-attack sensors. Further, there was a procedure, called the “runaway stabilizer checklist,” that was to be enacted under that condition. Proper following of that checklist (within the 4-second window, mind you) would have resulted in the deactivation of the MCAS system in reaction to the sensor failure. But why not, instead, design the MCAS system to either a) take all available, relevant sensors as input before assuming corrective action is necessary or b) take as input the warning about conflicting sensor readings? I won’t pretend to understand what all goes into this system. There are probably any number of reasons, some good, some not-so-good, and some entirely compelling, that drove Boeing to this particular solution. It is for that reason I led off using my expert system as an analogy: since I’m making the analogies, I can claim I understand, entirely, the problem that I’m defining.
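Just to make option a) concrete, the cross-check could be as simple as refusing to act when the redundant sensors disagree. This is purely my own illustration with invented thresholds; it says nothing about how Boeing’s system is actually implemented:

```python
# Purely hypothetical sketch of option a): cross-check the redundant
# angle-of-attack (AoA) sensors before commanding any automatic trim.
# Threshold values and behavior are invented for illustration only.

DISAGREE_THRESHOLD_DEG = 5.0   # assumed disagreement limit
HIGH_AOA_DEG = 14.0            # assumed "approaching stall" angle

def automatic_trim_command(left_aoa_deg, right_aoa_deg):
    """Command nose-down trim only if both sensors agree a correction is needed."""
    if abs(left_aoa_deg - right_aoa_deg) > DISAGREE_THRESHOLD_DEG:
        return {"trim": 0.0, "alert": "AOA DISAGREE - automatic trim inhibited"}
    aoa = (left_aoa_deg + right_aoa_deg) / 2.0
    if aoa > HIGH_AOA_DEG:
        return {"trim": -0.5, "alert": None}   # modest nose-down increment
    return {"trim": 0.0, "alert": None}

print(automatic_trim_command(22.0, 4.0))   # sensors disagree: take no automatic action
print(automatic_trim_command(16.0, 15.5))  # both high: command nose-down trim
```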

Back then, a fellow engineer and enthusiast for technologies like expert systems and fuzzy logic (a promising technique for using rules in non-binary control) explained it to me with a textbook example. Imagine, as we’ve done before, you have a self-driving car. In this case, its self-driving intelligence uses a rule-based expert system for high-level decision making. While out and about, the car comes to an unexpected fork in the road. In computing how to react, one rule says to swerve left and one says to swerve right. In a fuzzy controller, the solution to conflicting conclusions might be to weight and average the two rule outputs. As a result, our intelligent car would elect to drive on, straight ahead, crashing into the tree that had just appeared in the middle of the road. The example is oversimplified to the point of absurdity, but it does point out a particular, albeit potential, flaw with rule-based systems. I also think it helps explain, by analogy, the danger lurking in the control of complex systems when your analysis is focused on discrete functions.
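The failure mode in that textbook example is easy to show with a toy calculation, assuming the defuzzification step is a simple weighted average of the rule outputs (one common, if naive, choice):

```python
# Toy illustration of the conflicting-rules problem: two equally confident
# rules recommend opposite steering, and a weighted-average defuzzifier
# splits the difference -- straight into the obstacle.

rule_outputs = [
    {"steer": -1.0, "weight": 0.9},  # rule: obstacle ahead on the right -> swerve left
    {"steer": +1.0, "weight": 0.9},  # rule: obstacle ahead on the left  -> swerve right
]

total_weight = sum(r["weight"] for r in rule_outputs)
steer = sum(r["steer"] * r["weight"] for r in rule_outputs) / total_weight
print(steer)  # 0.0 -- no swerve at all, the unintended "emergent" answer
```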

With the logic for your system being made up of independent components, the overall system behavior becomes “emergent” – a combination of the rule base and the environment in which it operates. In the above case, each piece of the component logic dictated swerving away from the obstacle. It was only when the higher-level system did its stuff that the non-intuitive “don’t swerve” emerged. In contrast with more traditional code design, the number of possible states in rule-based development may be indeterminate by design. Your expert input might be intended to be partial, completed only when synthesized with the operational environment. Or look at it by way of the quality assurance problem it creates. While you may be creating the control system logic without understanding the entire environment within which it will operate, wouldn’t you still be required to understand, exhaustively, that entire environment when testing? Otherwise, how could you guarantee what the addition of one more expert rule would or wouldn’t do to your operation?

Modern software engineering processes have been built, to a large extent, based on an understanding that the earlier you find a software issue, the cheaper it is to solve. A problem identified in the preliminary, architectural stage may be trivial. Finding and fixing something during implementation is more expensive, but not as expensive as creating a piece of buggy software that has to be fixed either during the full QA testing or, worse yet, after release. Good design methodologies also eliminate, as much as possible, the influence that lone coders and their variable styles and personalities might have upon the generation of large code bases.  We now feel that integrated teams are superior to a few eccentric coding geniuses. This goes many times over when it comes to critical control systems upon which people’s lives may depend. Even back when, say, an accounting system might have been cobbled together by a brilliant hacker-turned-straight, avionics software development followed rigid processes meant to tightly control the quality of the final product. This all seems to be for the best, right?

Yes, but part of what I see here is a systematization that eliminates not just the bad influences of the individual, but their creative and corrective influence as well. If one person had complete creative control over the Boeing MAX software, that person likely would never have shipped something like the MCAS reliance on only one of a pair of sensors. The way we write code today, however, there may be no individual in charge. In this case, the decision to make the MCAS a link between the pilot’s control stick and the horizontal stabilizer rather than an automated response at a higher level isn’t a software decision; it’s a cockpit design decision. As such it’s not only outside of the purview of software design, but perhaps outside of the control of Boeing itself if it evolved as a reaction to a part of the regulatory structure. In a more general sense, though, will the modern emphasis on team-based, structured coding methodology have the effect of siloing the coders? A small programming team assigned a discrete piece of the puzzle not only lacks responsibility for seeing the big-picture issues; those issues won’t even be visible to it.

In other words (cycling back to that comment on my friend’s posting many months ago), shouldn’t the software developers understand the underlying systems for which they are writing software? Likely, the design/implementation structure for this part of the system would mean that it wouldn’t be possible for a programmer to see that either a) the sensor they are using as input is one of a redundant pair of sensors and/or b) there is separate indication that might tell them whether the sensor they are using as input is reliable. Likewise, large team-based development methodologies probably don’t bring to the avionics software team the programmer who is also a controls engineer with experience piloting aircraft – that ideal combination of programmer and domain expert that we talked about in the expert system days. I really don’t know whether this is an inevitable direction for software development or whether it is something done, for better or for worse, differently at different companies. If the latter, the solutions may simply lie with culture and management within software development companies.

So far, I’ve mostly been explaining why we shouldn’t point the finger at the programmers, but neither of the articles does. In both cases, blame seems to be reserved for the highest levels of aircraft development: the business level and the regulatory level. The Medium article criticizes the use of engineered workarounds for awkward physics as solutions to business problems (increasing the capacity of an existing plane rather than, expensively, creating a new one). The Wall St. Journal focuses on the philosophy that pilots will respond unerringly to warning indicators, often under very tight time constraints and under ambiguous and conflicting conditions. Both articles would tend to fault under-regulation by the FAA, but heavy-handed regulation may be just as much to blame as light oversight. Particularly, I’m thinking of the extent to which Boeing hesitated to pass information to the customers for fear of triggering expensive regulatory requirements. When regulations encourage a reduction in safety, is the problem under-regulation or over-regulation?

Another point that jumped out at me in the Journal article is that at least one of the redesigns that went into the Boeing MAX was driven by FAA high-level design requirements for today’s human-machine interfaces for aircraft control. From the WSJ:

[Boeing and FAA test pilots] suggested MCAS be expanded to work at lower speeds so the MAX could meet FAA regulations, which require a plane’s controls to operate smoothly, with steadily increasing amounts of pressure as pilots pull back on the yoke.

To adjust MCAS for lower speeds, engineers quadrupled the amount the system could repeatedly move the stabilizer, to increments of 2.5 degrees. The changes ended up playing a major role in the Lion Air and Ethiopian crashes.

To put this in context, the MCAS system was created to prevent an instability at some high altitude conditions, conditions which came about as a result of larger engines that had been moved to a suboptimal position. Boeing decided that this instability could be corrected with software. But if I’m reading the above correctly, there are FAA regulations focused on making sure a fly-by-wire system still feels like the mechanically-linked controls of yore, and MCAS seemed perfectly suited to help satisfy that requirement as well. Pushing this little corner of the philosophy too far may have been a proximate cause of the Boeing crashes. Doesn’t this also, however, point to a larger issue? Is there a fundamental flaw with requiring that control systems artificially inject physical feedback as a way to communicate with the pilots?

In some ways, it’s a similar concern to what I talked about with the automated systems in cars. In addition to the question of whether over-automation is removing the connection between the driver/pilot and the operational environment, there is, for aircraft, an additional layer. An aircraft yoke’s design came about because it was directly linked to the control surfaces. In a modern plane, the controls are not. Today’s passenger jet could just as well use a steering wheel or a touch screen interface or voice-recognition commands. The designs are how they are to maintain a continuity between the old and the new, not necessarily to provide the easiest or most intuitive control of the aircraft as it exists today. In addition, and by regulatory fiat apparently, controls are required to mimic that non-existent physical feedback. That continuity and feedback may also be obscuring logical linkages between different control surfaces that could never have existed when the interface was mechanically linked to the controlled components.

I foresee two areas where danger could creep in. First, the pilot responds to the artificially-induced control feedback under the assumption that it is telling him something about the physical forces on the aircraft. But what if there is a difference? Could the pilot be getting the wrong information? It sure seems like a possibility when feedback is being generated internally by the control system software. Second, the control component (in this case, the MCAS system) is doing two things at once: stabilizing the aircraft AND providing realistic feedback to the pilots by feel through the control yoke. Like the car that can’t decide whether to swerve right or left, such a system risks, in trying to do both, getting neither right.

I’ll sum up by saying I’m not questioning the design of modern fly-by-wire controls and cockpit layouts; I’m not qualified to do so. My questions are about the extent to which both regulatory requirements and software design orthodoxy box in the range of solutions available to aircraft-control designers in a way that limits the possibilities of creating safer, more efficient, and more effective aircraft for the future.

Artificial, but Intelligent? Part 2

I just finished reading Practical Game AI Programming: Unleash the power of Artificial Intelligence to your game. My conclusion is there is a lot less to this book than meets the eye.

For someone thinking of purchasing this book, it would be difficult to weigh that decision before committing. The above link to Amazon has (as of this writing) no reviews. I’ve not found any other, independent evaluations of this work. Perhaps you could make a decision simply by studying the synopsis of this book before you buy it. Having done that, it is possible that you’d be prepared for what it offered. Having read the book, and then going back and reading the Amazon summary (which is taken from the publisher’s website), I find that it more or less describes the book’s content. In my case, I picked this book up as part of a Humble Book Bundle, so it was something of an impulse buy. I didn’t dig too hard into the description and instead worked my way through the chapters with my only expectations being based on the title.

Even applying the highest level of pre-purchase scrutiny only gets you so far. The description may indicate that the subject matter is of interest, but it is still a marketing pitch. It gives you no idea of the quality of either the information or the presentation. Furthermore, I think someone got a little carried away with their marketing hype. The description also tosses out some technical terms (e.g. rete algorithm, forward chaining, pruning strategies) perhaps meant to dazzle the potential buyer with AI jargon. The problem is, these terms don’t even appear in the book, much less get demonstrated as a foundation for game programming. I feel that no matter how much upfront research you did before you bought, you’d come away feeling you got less than you bargained for.

What this book is not is an exploration of artificial intelligence as I have discussed that term previously on this website. This is not about machine learning or generic decision-making algorithms or (despite the buzz words) rule-engines. The book mentions applications like Chess only in passing. Instead, the term “AI” is used as a gamer might. It discusses a few tricks that a game programmer can use to make the supposedly-intelligent entities within a game appear to have, well, intelligence when encountered by the player.

The topic that it does cover does, in fact, have some merit. The focus is mostly on simple algorithms and minimal code required to create the impression of intelligent characters within a game. Some of the topics I found genuinely enlightening. The overarching emphasis on simplicity is also something that makes sense for programmers to aspire to. There is no need to program a character to have a complex motivation if you can, with only a few lines of code, program him to appear to have such complex motivation. It is just that I’m not sure that these lessons qualify as “unleashing the power of Artificial Intelligence” by anyone’s definition.

But even before I got that far, my impression started off very bad. The writing in this book is rather poor, in terms of grammar, word usage, and content. In some cases, misused words become so jarring as to make it difficult to read through a page. Elsewhere, there will be several absolutely meaningless sentences strung together, perhaps acknowledging that a broader context is required but not knowing how to express it. At first, I didn’t think I was going to get very far into the book. After a chapter or so, however, reading became easier. Part of it may be my getting used to the “style,” if one can call it that. Part of it may also be that there is more “reaching” in the introductory and concluding sections but less when writing about concrete points.

I can’t say for sure but it is my guess, based on reading through the book, that the author does not use English as his primary language. I sometimes wondered if the text was written first in another language and then translated (or mistranslated, as the case may be) into English. Beyond that, the book also does not seem to have benefited from the work of a competent editor.

The structure of the chapters, for the most part, follows a pattern. A concept is introduced by showcasing some “classic” games with the desired behavior. Then some discussion about the principle is followed by a coding example, almost always in Unity‘s C# development environment. This is often accompanied by screenshots of Unity’s graphics, either in development mode or in run-time. Most of the chapters, however, feel “padded.” Screenshots are sometimes repetitious. Presentation of the code is done incrementally, with each new addition requiring the re-printing of all of the sample code shown so far along with the new line or lines added in. By the end of the chapter, adding a concept might consist of two explanatory sentences, 3 screenshots, and two pages of sample code, 90% of which is identical to the sample code several pages earlier in the book. This is not efficient and I don’t think it is useful. It does drive the page count way up.

I want to offer a caveat for my review. This is the first book I’ve read from this publisher. When reading about some of their other titles, it was explained that the books come with sample source code. If you buy the book directly from the publisher’s website (which I did not), the sample code is supposed to be downloaded along with the book text. If you buy from a third party, they provide a way to register your purchase on the publisher’s site to get access to the downloads. I did not try this. If this book does have downloaded samples that can be loaded into Unity, and those samples are well-done, that has a potential for adding significant value over the book on its own.

Back to the chapters. Going back through them, it again feels like there is some “padding” going on to make the subject matter seem more extensive than it is. The book starts with two chapters on Finite State Machines (FSMs) and how that logic can be used to drive an “AI” character’s reactions to the player. Then the book takes a detour into Unity’s support for a Finite State Machine implementation of animations, which has its own chapter. This is mostly irrelevant to the subject of game AI and also, likely, of little value if you’re not using Unity.
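In case the term is unfamiliar, the FSM idea the book opens with boils down to something like the following sketch (my own generic Python illustration, not the book’s Unity C#): the character is always in exactly one state, and simple observations trigger the transitions.

```python
# Minimal finite-state-machine sketch for an "AI" guard character. The guard
# is always in exactly one state; transitions fire on simple observations.

class Guard:
    def __init__(self):
        self.state = "patrol"

    def update(self, sees_player, player_distance):
        if self.state == "patrol":
            if sees_player:
                self.state = "chase"
        elif self.state == "chase":
            if not sees_player:
                self.state = "patrol"
            elif player_distance < 2.0:
                self.state = "attack"
        elif self.state == "attack":
            if player_distance >= 2.0:
                self.state = "chase"
        return self.state

guard = Guard()
print(guard.update(sees_player=True, player_distance=10.0))  # patrol -> chase
print(guard.update(sees_player=True, player_distance=1.5))   # chase -> attack
```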

After the animation chapter, we head back into the AI world with a discussion of the A* pathfinding algorithm and its Theta* variant. This discussion is accompanied by a manually worked solution for a simple square-grid 2D environment, describing each calculation and illustrating each step. I do appreciate the concrete example of the algorithm in action. Many explanations of this topic I’ve found on-line simply show code or pseudo-code and leave it to the “student” to figure it all out. In this case, though, I think he managed to drive the page count up by an order of magnitude over what would have been sufficient to explain it clearly.
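For readers who just want the gist of A* without the page count, a compact grid version (my own sketch, not the book’s code) fits in a few dozen lines:

```python
# Compact A* on a square grid: 0 = open cell, 1 = wall.
# Uses Manhattan distance as the heuristic; returns a list of (row, col) steps.
import heapq

def astar(grid, start, goal):
    rows, cols = len(grid), len(grid[0])
    open_heap = [(0, start)]            # entries are (f = g + h, cell)
    g_cost = {start: 0}
    came_from = {}
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:
            path = [current]
            while current in came_from:  # walk parent links back to the start
                current = came_from[current]
                path.append(current)
            return path[::-1]
        r, c = current
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                new_g = g_cost[current] + 1
                if new_g < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = new_g
                    h = abs(nr - goal[0]) + abs(nc - goal[1])  # Manhattan heuristic
                    heapq.heappush(open_heap, (new_g + h, (nr, nc)))
                    came_from[(nr, nc)] = current
    return None  # no path exists

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0]]
print(astar(grid, (0, 0), (2, 0)))  # walks around the wall to reach the goal
```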

The final chapters show how Unity’s colliders and raycasting can be used to implement both collision avoidance and vision/detection systems. These are two very similar problems involving reacting to other objects in the environment that, themselves, can move around. As I said earlier, there are some useful concepts here, particularly in emphasizing a “keep it simple” design philosophy. If you can use configurable attributes on your development tool’s existing physics system to do something, that’s much preferable to generating your own code base. That goes double if the perception for the end user is indistinguishable, one method from the other. However, I also get the feeling that I’m just being shown some pictures of simple Unity capabilities, rather than “unleashing the power of AI” in any meaningful sense.
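Stripped of the engine-specific collider and raycast calls, the core of such a detection check is mostly a distance test plus an angle test. Here is a bare-bones sketch of my own under that assumption (it omits the occlusion test that a raycast against obstacles would provide):

```python
# Bare-bones "can the guard see the player?" check: within view distance and
# within half the field-of-view angle. The occlusion (line-of-sight) test that
# an engine raycast would provide is deliberately omitted.
import math

def can_see(guard_pos, guard_facing_deg, player_pos,
            view_distance=15.0, fov_deg=90.0):
    dx = player_pos[0] - guard_pos[0]
    dy = player_pos[1] - guard_pos[1]
    if math.hypot(dx, dy) > view_distance:
        return False
    angle_to_player = math.degrees(math.atan2(dy, dx))
    # smallest signed difference between the two headings, in degrees
    diff = (angle_to_player - guard_facing_deg + 180) % 360 - 180
    return abs(diff) <= fov_deg / 2

print(can_see((0, 0), 0.0, (10, 3)))   # True: close and nearly dead ahead
print(can_see((0, 0), 0.0, (-10, 0)))  # False: directly behind the guard
```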

A few years back, I was trying to solve a similar problem, but trying to be predictive about the intent of the other object. For example, if I want to plot an intercept vector to a moving target but that target is not, itself, moving at a constant rate or direction, I need a good bit more math than the raycasting and colliders provide out of the box. Given the promise of this book’s subject matter, that might be a problem I’d expect to find, perhaps in the next chapter.
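Even the simplified version of that problem, where the target does hold a constant velocity, already needs more than a raycast: you end up solving a quadratic for the intercept time. The sketch below is my own illustration of that easier case, not anything from the book; a maneuvering target needs prediction layered on top of it.

```python
# Intercept of a constant-velocity target by a pursuer with a fixed speed:
# solve |T + V*t| = s*t for time t, where T is the target's position relative
# to the pursuer, V the target's velocity, and s the pursuer's speed.
import math

def intercept_time(rel_pos, target_vel, pursuer_speed):
    a = target_vel[0]**2 + target_vel[1]**2 - pursuer_speed**2
    b = 2 * (rel_pos[0] * target_vel[0] + rel_pos[1] * target_vel[1])
    c = rel_pos[0]**2 + rel_pos[1]**2
    if abs(a) < 1e-9:                  # equal speeds: the equation is linear
        return -c / b if b < 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                    # the target can never be caught
    t1 = (-b - math.sqrt(disc)) / (2 * a)
    t2 = (-b + math.sqrt(disc)) / (2 * a)
    times = [t for t in (t1, t2) if t > 0]
    return min(times) if times else None

t = intercept_time(rel_pos=(100.0, 0.0), target_vel=(0.0, 10.0), pursuer_speed=20.0)
print(t)  # aim at rel_pos + target_vel * t to meet the target at time t
```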

Alas, after discussing the problem of visual detection involving both direction and obstacles, the book calls an end to its journey. With the exception of the A* algorithm, the AI solutions consist almost entirely of Unity 3D geometry calls.

Although the book claims to be written in a way such that each chapter can be applied to a wide range of games, I feel like it narrows its focus as it progresses. The targeted game is, and I struggle with how to describe it so I’ll just pick an example, the heirs to the DOOM legacy. By this, I mean games where the player progresses through a series of “levels” in order to complete the game. What the player encounters through those levels is imagined and created by the designer so as to construct the story of the game. The term AI, then, distinguishes between different kinds of encounters, at least as far as the player perceives them. For example, the player might find herself rushing across a bridge, which starts to collapse when she reaches the middle. This requires no “AI.” There is simply programming that, when the player reaches a certain point on the bridge, calls the “collapseBridge” routine. If she makes it past the bridge and into the next chamber, where there are a bunch of gremlins that want to do her in, the player starts considering the “AI” of those gremlins. Do they react to what the player does, adopting different tactics depending on her tactics? If so, she might praise the “AI.” By the book’s end, the focus is entirely on awareness of and reaction between mobile elements of a game which, by defining the problem that way, narrows things to a subset of even this category of games.

My harping on the narrow focus of this book goes to the determination of its value. If this book were free or very low cost, you would have to decide whether the poor use of English and the style detract from whatever useful information is presented. The problem with that is the price this book asks. The hardcopy (paperback) of the book is $50.00. The ebook is $31.19 on Amazon, discounted to $28 if you buy directly from the publisher’s site. All of those seem like a lot of money, per my budget. Now, my own price I figure to have been $7. I bought the $8 bundle package over the $1 package purely based on interest in this title. This is the first book in that set I’ve read, so if some of the others are good, I might consider the cost to be even lower. Still, even at $5, I feel like I’ve been cheated a bit by the content of this book.

The bundle contained other books from this same publisher, so I’ll plan to read at least one other before drawing any conclusions about their whole library. Assuming that the quality of this book is, in fact, an outlier, this is still a risk to the publisher’s reputation. When one of your books is overpriced and oversold, the cautious buyer should assume that they are all overpriced and oversold. Looking at the publisher’s site, this book has nothing but positive reviews. It’s really a blemish on the publisher as a whole.

Although I won’t go so far as to say “I wish I hadn’t wasted the time I spent reading this,” I can’t imagine any purchaser for whom this title would be worth the money.

The Pride and Disgrace

Handful of Senators don’t pass legislation.

On August 10th, 1964, the Gulf of Tonkin Resolution was enacted. It was the quick response from America’s politicians to the attack upon a U.S. naval vessel off the coast of Vietnam by torpedo boats from the communist regime of North Vietnam. That attack had occurred on August 2nd with an additional “incident” taking place on August 4th. The resolution (in part) authorized the President “to take all necessary steps, including the use of armed force, to assist any member or protocol state of the Southeast Asia Collective Defense Treaty requesting assistance in defense of its freedom.” This would come to justify the deployment of U.S. troops to directly engage the enemies of South Vietnam.

Before this time, South Vietnam was fighting to eliminate a communist insurgency driven by remnants of the Việt Minh. The communist guerillas had agreed to resettle in the North as part of the Geneva Agreement which ended France’s war in Vietnam in 1954. The United States saw their continued attacks on the government of South Vietnam as a violation of that peace. In particular, the support they obtained from across national borders was considered part of a strategic plan by the Soviet Union and China to spread communism throughout all of the countries of Southeast Asia.

Even before the withdrawal of France, the United States had supported the anti-communist fight with money, matériel, and military personnel (in the form of advisors). After the French exit, a steady increase in commitment from the U.S. was evident. Nevertheless, the signing of the Gulf of Tonkin resolution marks a milestone in America’s involvement, arguably the start of the U.S. war in Vietnam. Although it would take almost another year for U.S. forces to become clearly engaged, President Johnson’s reaction to the Gulf of Tonkin incidents seems to have set the course inevitably toward that end.

I’m sittin’ here.
Just contemplatin’.
I can’t twist the truth.
It knows no regulation.

The anniversary and the nature of it got me to thinking about how one might portray the Vietnam War as a game. Given the context, I’m thinking purely along the lines of the game focused at the strategic level, taking into account the political and international considerations that drove the course of the conflict.

From a high-level perspective, one might divide America’s war in Vietnam into four distinct phases. In the first, the U.S. supported Vietnam’s government with financial and military aid, and with its advisors. While U.S. soldiers were, in fact, engaging in combat and being killed, it wasn’t as part of American combat units, allowing the U.S. to convince itself that this was a conflict purely internal to South Vietnam. Through the presidencies of Eisenhower, Kennedy, and Johnson, the amount of aid to, and the number of Americans in, Vietnam increased. However, the big change, and the transition to the second phase, can be located after the passage of the Gulf of Tonkin Resolution and the subsequent deployment of the U.S. Marines directly and as a unit.

At first, U.S. direct involvement carried with it a measure of popular support and was, from a purely military standpoint, overwhelmingly successful. Johnson and the military were wary of pushing that involvement in ways that would turn the public opinion against them. The U.S. feared casualties as well as signs of escalation that could be interpreted as increasing military commitment (for example, extending the service time for draftees beyond twelve months), but in general this was a period of an increasing U.S. buildup and, generally, successful operations. Nevertheless, progress in the war defied a clear path toward resolution.

The third phase is probably delineated by the 1968 Tet offensive. While still, ultimately, a military success from the U.S. standpoint, the imagery of Viet Cong forces encroaching on what were assumed by all to be U.S.-controlled cities turned opinion inexorably against continuing engagement in Vietnam. The next phase, then, was what Nixon called “Vietnamization,” the draw-down of American direct involvement to be replaced with support for the Army of the Republic of Vietnam (ARVN). Support was again in the form of money, equipment, and training as well as combat support. For example, a transition to operations where ARVN ground units would be backed by U.S. air power.

The final phase is where that withdrawal is complete, or at least getting close to that point, where joint operations were no longer in the cards. Clearly this phase would describe the post-Paris accords situation, after Nixon’s resignation, as well as encompassing the final North Vietnamese operation that rolled up South Vietnam and Saigon.

From a gaming perspective, and a strategic-level gaming perspective at that, the question becomes what decisions are there for a player to make within these phases and, perhaps more importantly, what decisions would prompt the transition from one phase to another.

The decision to initially deploy U.S. troops, made by Johnson in early 1965, seems to have been largely driven by events. Having Johnson as president was probably a strong precondition. Although he ran against Goldwater on a “peace” platform, the fact that he saw his legacy as being tied to domestic policy probably set up the preconditions for escalation. A focus on Vietnam was never to be part of his legacy, but given the various triggers in late 1964 and early 1965, his desire to avoid a loss to communism in Vietnam propelled his decision to commit ground troops. You might say his desire to keep it from being a big deal resulted in it being a big deal.

Where this all seems to point is that any strategic Vietnam game beginning much before Spring of 1965 must restrict the player from making the most interesting decision; if and when to commit U.S. ground troops and launch into an “American” war.

Amusingly, if you subscribe to the right set of conspiracy theories, the pivotal events might really be under control of a grand-strategic player after all. Could it be that the real driver behind Kennedy’s assassination was to put a President in office who would be willing to escalate in Vietnam? Was the deployment of the USS Maddox on the DESOTO electronic warfare mission meant to provoke a North Vietnamese response? How about the siting of aviation units in Vietnam at places like Camp Holloway, which would become targets for the Viet Cong? Where actual aggression by the North wasn’t sensational enough, were details fabricated? This rather far-flung theorizing would not only make the resulting game that much harder to swallow, but it is also difficult to see how any fully-engineered attempt to insert America into Vietnam could have moved up the timetable.

So it would only make sense to start our game with our second phase, which must come after our Gulf of Tonkin incident and the 1964 presidential election, at a minimum.

The remaining game will still be an unconventional one, although we do have some nice examples of how it could be done from over the years. Essentially, the U.S. will always be able to draw upon more military power and, ultimately, sufficient military power to prevail in any particular engagement. Yet while it is possible, through insufficient planning, to achieve a military loss as the U.S., it is probably not going to be possible to achieve a military victory. On the U.S. side, the key parameters are going to be some combination of resources and “war weariness.”

Our Vietnam game would rule out, either explicitly or implicitly, a maximal commitment to victory by the United States. American planners considered options such as unfettered access to Laos and Cambodia, an outright invasion of North Vietnam, or even tactical nuclear weapons. The combination of deteriorating domestic support and the specter of overt Chinese and Soviet intervention would seem to be a large enough deterrent to prevent exercise of these options. This is one of the reasons that rules (those that I’ve come across, anyway) simply forbid, for example, crossing units into North Vietnam.

The other reason is one of scope. If a ground invasion of North Vietnam is on the table, then the map needs to include all the potential battlefields in the North in addition to the actual battlefields of the South. Likewise extended areas within Cambodia and Laos need to be available to the player. Continuing on, if U.S. ground forces are going to be straying that close to North Vietnam’s northern border, might it not be necessary to include China in as well? Perhaps having learned a lesson from Korea, our player would react to Chinese direct intervention by taking the fight onto Chinese sovereign territory. It doesn’t take long before we have to consider adding Germany, Korea, Cuba, and any other hotspot of the time as a potential spillover for escalation in Vietnam. Besides the problem of the game expanding without limitations, we have another design concern. A Vietnam game narrative adhering closely to the historical path has the advantage of actual battles, strategies, and events on which to model itself. If all the forces of NATO, the Warsaw Pact, and China are fair game, we are now in the realm of pure speculation.

If for no other reason than to maintain the sanity of the designer, it seems that rules which quickly push the U.S. into its historical de-escalation policy are the right way to tie off such a game on the other end. I will save the consideration of how that might work for another time.

Artificial, Yes, but Intelligent?

Keep your eyes on the road, your hands upon the wheel.

When I was in college, only one of my roommates had a car. The first time it snowed, he expounded upon the virtues of finding an empty and slippery parking lot and purposely putting your car into spins.  “The best thing about a snow storm,” he said. At the time I thought he was a little crazy. Later, when I had the chance to try it, I came to see it his way. Not only is it much fun to slip and slide (without the risk of actually hitting anything), but getting used to how the car feels when the back end slips away is the first step in learning how to fix it, should it happen when it actually matters.

Recently, I found myself in an empty, ice-covered parking lot and, remembering the primary virtue of a winter storm, I hit the gas and yanked on the wheel… but I didn’t slide. Instead, I encountered a bunch of beeping and flashing as the electronic stability control system on my newish vehicle kicked in. What a disappointment it was. It also got me thinkin’.

For a younger driver who will almost never encounter a loss-of-traction slip condition, how do they learn how to recover from a slide or a spin once it starts? Back in the dark ages, when I was learning to drive, most cars were rear-wheel-drive with a big, heavy engine in the front. It was impossible not to slide around a little when driving in a snow storm. It was almost a prerequisite to going out into the weather to know all the tricks of slippery driving conditions. Downshifting (or using those number gears on your automatic transmission), engine braking, and counter steering were all part of getting from A to B. As a result*, when an unexpectedly slippery road surprises me, I instinctively take my foot off the brakes/gas and counter-steer without having to consciously remember the actual lessons. So does a car that prevents sliding 95% of the time result in a net increase in safety, even though it probably makes that other 5% worse? It’s not immediately obvious that it does.

On the Road

I was reminded of the whole experience a month or so ago when I read about the second self-driving car fatality. Both crashes happened within a week or so of each other in Western states: the first in Arizona and the second in California. In the second crash, Tesla’s semi-autonomous driving function was in fact engaged at the time of the crash and the driver’s hands were not on the wheel six seconds prior. Additional details do not seem to be available from media reports, so the actual how and why must remain the subject of speculation. In the first, however, the media has engaged in the speculation for us. In Arizona, it was an Uber vehicle (a Volvo in this case) that was involved and the fatality was not the driver. The media has also reported quite a lot that went wrong. The pedestrian who was struck and killed was jaywalking, which certainly is a major factor in her resulting death. Walking out in front of a car at night is never a safe thing to do, whether or not that car is self-driving. Secondly, video was released showing the driver was looking at something below the dashboard level immediately before the crash, and thus was not aware of the danger until the accident occurred. The self-driving system itself did not seem to take any evasive action.

Predictably, the Arizona state government responded by halting the Uber self-driving car program. More on that further down, but first look at the driver’s distraction.

After the video showing such was released, media attention focused on the distracted-driving angle of the crash. It also brought up the background of the driver, who had a number of violations behind him. Certainly the issue of electronics and technology detracting from safe driving is a hot topic and something, unlike self-driving Uber vehicles, that most of us encounter in our everyday lives. But I wonder if this exposes a fundamental flaw in the self-driving technology?

It’s not exactly analogous to my snow situation above, but I think the core question is the same. The current implementation of the self-driving car technology augments the human driver rather than replaces him or her. In doing so, however, it also removes some of the responsibility from the driver as well as making him more complacent about the dangers that he may be about to encounter. The more that the car does for the driver, the greater the risk that the driver will allow his attention to wander rather than stay focused, on the assumption that the autonomous system has him covered. In the longer term, are there aspects of driving that the driver will not only stop paying attention to, but lose the ability to manage in the way a driver of a non-automated car once did?

Naturally, all of this can be designed into the self-driving system itself. Even if a car is capable of, essentially, driving itself over a long stretch of a highway, it could be designed to engage the driver every so many seconds. Requiring otherwise-unnecessary input from the operator can be used to make sure she is ready to actively control the car if needed. I note that we aren’t breaking new ground here. A modern aircraft can virtually fly itself, and yet some part of the design (plus operational procedures) are surely in place to make sure that the pilots are ready when needed.
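A crude version of that idea is just a watchdog on driver input. The sketch below is purely conceptual, with invented thresholds; it is not how Tesla or any other manufacturer actually implements driver monitoring:

```python
# Conceptual sketch only: a watchdog that escalates if the driver provides no
# steering input for too long while a driver-assist mode is active.
# The thresholds are invented for illustration.

WARN_AFTER_S = 15.0       # assumed: nag the driver after 15 s without input
DISENGAGE_AFTER_S = 30.0  # assumed: hand control back after 30 s without input

def supervise(seconds_since_driver_input, assist_engaged):
    if not assist_engaged:
        return "manual"
    if seconds_since_driver_input >= DISENGAGE_AFTER_S:
        return "disengage_assist"   # slow down, alert, require takeover
    if seconds_since_driver_input >= WARN_AFTER_S:
        return "warn_driver"        # chime or visual nag
    return "assist_active"

print(supervise(5.0, True))   # assist_active
print(supervise(20.0, True))  # warn_driver
print(supervise(45.0, True))  # disengage_assist
```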

As I said, the governmental response has been to halt the program. In general, it will be the governmental response that will be the biggest hurdle for self-driving car technology.

In the specific case of Arizona, I’m not actually trying to second guess their decision. Presumably, they set up a legal framework for the testing of self-driving technology on the public roadways. If the accident in question exceeded any parameters of that legal framework, then the proper response would be to suspend the testing program. On the other hand, it may be that the testing framework had no contingencies built into it, in which case any injuries or fatalities would have to be evaluated as they happen. If so, a reactionary legal response may not be productive.

I think, going forward, there is going to be a political expectation that self-driving technology should be flawless. Or, at least, perfect enough that it will never cause a fatality. Never mind that there are 30-40,000 motor vehicle deaths per year in the United States and over a million per year worldwide. It won’t be enough that an autonomous vehicle is safer than a non-autonomous vehicle; it will have to be orders-of-magnitude safer. Take, as an example, passenger airline travel. Despite a safety record for aircraft that is probably about 10X better than for cars, the regulatory environment for aircraft is much more stringent. Take away the “human” pilot (or driver) and I predict the requirements for safety will be much higher than for aviation.

Where I’m headed in all this is, I suppose, to answer the question about when we will see self driving cars. It is tempting to see that as a technological question – when will the technology be mature enough to be sold to consumers? But it is more than that.

I recall seeing somewhere an example of “artificial intelligence” for a vehicle system. The example was of a system in which a ball rolling across the street triggers logic that anticipates there might be a child chasing that ball. A good example of an important problem to solve before putting an autonomous car onto a residential street. Otherwise, one child run down while he was chasing his ball might be enough for a regulatory shutdown. But how about the other side of that coin? What happens the first time a car swerves to avoid a non-existent child and hits an entirely-existent parked car? Might that cause a regulatory shutdown too?

Is regulatory shutdown inevitable?

Robo-Soldiers

At roughly the same time that the self-driving car fatalities were in the news, there was another announcement, even more closely related to my previous post. Video-game developer EA posted a video showing the results of a multi-disciplinary effort to train an AI player for their Battlefield 1 game (which, despite the name, is actually the fifth version of the Battlefield series). The narrative for this demo is similar to that of Google’s (DeepMind) chess program. The training was created, as the marketing pitch says, “from scratch using only trial and error.” Without viewing it, it would seem to run counter to my previous conclusions, when I figured that the supposed generic, self-taught AI was perhaps considerably less than it appeared.

Under closer examination, however, even the minute-and-a-half of demo video does not quite measure up to the headline hype, the assertion that neural nets have learned to play Battlefield, essentially, on their own. The video explains that the training method involves manually placing rewards throughout the map to try to direct the behavior of the agent-controlled soldiers.

The time frame for a project like this one would seem to preclude them being directly inspired by DeepMind’s published results for chess. Indeed, the EA Technical Director explains that it was earlier DeepMind work with Atari games that first motivated them to apply the technology to Battlefield. Whereas the chess example demonstrated the ability to play chess at a world-class level, the EA project demonstration merely shows that the AI agents grasp the basics of game play and not much more. The team’s near-term aspirations are limited: use of AI for quality testing is named as an expected benefit of this project. He does go so far as to speculate that a few years out, the technology might be able to compete with human players within certain parameters. Once again, a far cry from a self-learning intelligence poised to take over the world.

Even so, the video demonstration offers a disclaimer: “EA uses AI techniques for entertainment purposes only. The AI discussed in this presentation is designed for use within video games, and cannot operate in the real world.”

Sounds like they wanted to nip any AI overlord talk in the bud.

From what I’ve seen of the Battlefield information, it is results only. There is no discussion of the methods used to create training data sets and design the neural network. Also absent is any information on how much effort was put into constructing this system that can learn “on its own.” I have a strong sense that it was a massive undertaking, but no data to back that up. When that process becomes automated (or even part of the self-evolution of a deep neural network), so that one can quickly go from a data set to a trained network (quickly in developer time, as opposed to computing time), the promise of the “generic intelligence” could start to materialize.

So, no, I’m not made nervous that an artificial intelligence is learning how to fight small unit actions. On the other hand, I am surprised at how quickly techniques seem to be spreading. Pleasantly surprised, I should add.

While the DeepMind program isn’t open for inspection, some of the fundamental tools are publicly available. As of late 2015, the Google library TensorFlow is available in open source. As of February this year, Google is making available (still in beta, as far as I know) their Tensor Processing Unit (TPU) as a cloud service. Among the higher-profile uses of TensorFlow is the app DeepFake, which allows its users to swap faces in video. A demonstration shows the app, using a standard desktop PC and about half an hour’s training time, producing something comparable to Industrial Light and Magic’s spooky-looking Princess Leia reconstruction.
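To give a sense of what “publicly available” means in practice, here is a minimal TensorFlow sketch of the sort anyone can run on a desktop PC – a made-up toy model, unrelated to DeepMind’s or the face-swapping app’s actual code:

# Minimal TensorFlow/Keras example: train a tiny network on made-up data.
import numpy as np
import tensorflow as tf

x = np.random.rand(1000, 8).astype("float32")     # fake inputs
y = (x.sum(axis=1) > 4.0).astype("float32")       # fake labels

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(8,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(x, y, epochs=5, batch_size=32, verbose=0)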

Meanwhile, Facebook also has a project inspired by DeepMind’s earlier Go neural network system. In a challenge to Google’s secrecy, the Facebook project has been made completely open source allowing for complete inspection and participation in its experiments. Facebook announced results, at the beginning of May, of a 14-0 record of their AI bot against top-ranked Go players.

Competition and massive-online participation is bound to move this technology forward very rapidly.

 

The future’s uncertain and the end is always near.

 

*To be sure, I learned a few of those lessons the hard way, but that’s a tale for another day.

ABC Easy as 42

Teacher’s gonna show you how to get an ‘A’

In 1989, IBM hired a team of programmers out of Carnegie Mellon University. As part of his graduate program, team leader Feng-hsiung Hsu (aka Crazy Bird) developed a system for computerized chess playing that the team called Deep Thought. Deep Thought, the (albeit fictional) original, was the computer created in Douglas Adams’s The Hitchhiker’s Guide to the Galaxy to compute the answer to Life, the Universe, and Everything. It was successful in determining that the answer was “42,” although it remained unknown what the question was. CMU’s Deep Thought, less ambitiously, was a custom-designed hardware-and-software solution for solving the problem of optimal chess playing.

Once at IBM, the project was renamed Deep Blue, with the “Blue” being a reference to IBM’s nickname of “Big Blue.”

On February 10th, 1996, Deep Blue won its first game against a chess World Champion, defeating Garry Kasparov. Kasparov would go on to win the match, but the inevitability of AI superiority was established.

Today, that computer programs are able to defeat humans is no longer in question. While the game of chess may never be solved (à la checkers), it is understood that the best computer programs are superior players to the best human beings. Within the chess world, computer programs only make news for things like top players allegedly using programs to gain an unfair advantage in tournament play.

Nevertheless, a chess-playing computer was in the news late last year. Headlines reported that a chess-playing algorithm based on neural networks, starting only from the rules of legal chess moves, had in four hours trained itself into a player that could beat any human and nearly all top-ranked chess programs. The articles spread across the internet through various media outlets, each summary featuring its own set of distortions and simplifications. In particular, writers who had been pushing articles about the impending loss of jobs to AI and robots jumped on this as proof that the end had come. Fortunately, most linked to the original paper rather than trying to decipher the details.

Like most, I found this to be pretty intriguing news. Unfortunately, I also happen to know a little (just a little, really) about neural networks, and didn’t even bother to read the whole paper before I started trying to figure out what had happened.

Some more background on this project. It was created at DeepMind, a subsidiary of Alphabet, Inc. This entity, formerly known simply as Google, reformed itself in the summer of 2015, with the new Google being one of many children of the Alphabet parent. Initial information suggested to me an attempt at creating one subsidiary for each letter of the alphabet, but time has shown that isn’t their direction. As of today, while there are many letters still open, several have multiple entries. Oh well, it sounded more fun my way. While naming a company “Alphabet” seems a bit uninspired, there is a certain logic to removing the name Google from the parent entity. No longer does one have to wonder why an internet company is developing self-driving cars.

Google’s self driving car?

 

The last time the world had an Artificial Intelligence craze was in the 1980s into the early 1990s. Neural networks were one of the popular machine intelligence techniques of that time too. At first they seemed to offer the promise of a true intelligence; simply mimicking the structure of a biological brain could produce an ability to generalize intelligence, without people to craft that intelligence in code. It was a program that could essentially teach itself. The applications for such systems seemed boundless.

Unfortunately, the optimism was quickly quashed. Neural networks had a number of flaws. First, they required huge amounts of “training” data. Neural nets work by finding relationships within data, but that source data has to be voluminous and it has to be suited to teaching the neural network. The inputs had to be properly chosen, so as to work well with the network’s manipulation of that data, and the data themselves had to be properly representative of the space being modeled. Furthermore, significant preprocessing was required of the person organizing the training. Additional inputs would result in exponential increases in both the training data requirement and the amount of processing time to run through the training.

It is worthwhile to recall the computing power available to neural net programmers of that time. Even a high-end server of 35 years ago is probably put to shame by the Xbox plugged into your television. Furthermore, the Xbox is better suited to the problem. The mathematics capability of Graphical Processing Units (GPUs) is a more efficient design for solving these kinds of matrix problems. Just like Bitcoin mining, it is the GPU on a computer that is best able to handle neural network training.

To illustrate, let me consider briefly a “typical” neural network application of the previous generation. One use is something called a “soft sensor.” Another innovation of that same time was the rapid expansion in capabilities of higher-level control systems for industrial processes. For example, some kind of factory-wide system could collect real-time data (temperatures, pressures, motor speeds – whatever is important) and present it in an organized fashion to give an overview of plant performance and, in some cases, automate overall plant control. For many systems, however, the full picture wasn’t always available in real time.

Let’s imagine the production of a product which has a specification limiting the amount of some impurity. Largely, we know what the right operating parameters of the system are (temperatures, pressures, etc.), but to actually measure for impurities, we manually draw a sample, send it off to a lab for testing, and wait a day or two for the result. It would stand to reason that, in order to keep your product within spec, you must operate far enough away from the threshold that if the process begins to drift, you would usually have time to catch it before it goes out of spec. Not only does that mean you are, most of the time, producing a product that exceeds specification (presumably at extra cost), but if the process ever moves faster than expected, you may have to trash a day’s worth of production created while you were waiting for lab results.

Enter the neural network and that soft sensor. We can create a database of the data that were collected in real time and correlate them with the matching sample analyses that were available afterward. Then a neural network can be trained using the real-time measurements as inputs to produce an output predicting the sample measurement. Assuming that the lab measurement is deducible from the on-line data, you now have in your automated control system (or even just as a presentation to the operators) a real-time “measurement” of data that otherwise wouldn’t be available until much later. Armed with that extra knowledge, you would expect to both cut operating costs (by operating tighter to specification) and prevent waste (by avoiding out-of-spec conditions before they happen).
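Here is a hedged sketch of what such a soft sensor might look like with today’s tooling (scikit-learn rather than anything period-appropriate); the column names and data file are hypothetical:

# Soft-sensor sketch: predict a lab-measured impurity from real-time process data.
# The data file and column names are hypothetical.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPRegressor

data = pd.read_csv("process_history.csv")                 # historical plant data
inputs = data[["reactor_temp", "feed_pressure", "motor_speed"]]
target = data["lab_impurity_ppm"]                         # matching lab results, available later

X_train, X_test, y_train, y_test = train_test_split(inputs, target, test_size=0.2)

scaler = StandardScaler().fit(X_train)                    # neural nets want scaled inputs
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000)
model.fit(scaler.transform(X_train), y_train)

print("R^2 on held-out data:", model.score(scaler.transform(X_test), y_test))
# In operation, model.predict() on live readings becomes the "soft sensor" value.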

That sounds very impressive, but I did use the word “assuming.” There were a lot of factors that had to come together before determining that a particular problem was solvable with neural networks. Obviously, the result you are trying to predict has to, indeed, be predictable from the data that you have. What this meant in practice is that implementing neural networks was much bigger than just the software project. It often meant redesigning your system to, for example, collect data on aspects of your operation that were never necessary for control, but are necessary for the predictive functioning of the neural net. You also need lots and lots of data. Operations that collected data slowly or inconsistently might not be capable of providing a data set suitable for training. Another gotcha was that collecting data from a system in operation probably meant that said system was already being controlled. Therefore, a neural net could just as easily be learning how your control system works, rather than the underlying fundamentals of your process. In fact, if your control reactions were consistent, that might be a much easier thing for the neural net to learn than the more subtle and variable physical process.

The result was that many applications weren’t suitable for neural networks and others required a lot of prep work. It might be necessary to rework your data collection system to get more and better data. It also was useful to pre-analyze the inputs to eliminate any dependent variables. Now, technically, that’s part of what the neural network should be good at – extracting the core dependencies from a complex system. However, the amount of effort – in data collected and training time – increases exponentially when you add inputs and hidden nodes, so simplifying a problem was well worth the effort. While it might seem like you can always just collect more data, remember that the data needed to be representative of the domain space. For example, if the condition that results in your process wandering off spec only occurs once every three or four months, then doubling your complexity might mean (depending on your luck) increasing the data collection from a month or two to over a year.
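One plain way to do that pre-analysis – my example, not a prescription from the era – is to drop inputs that are nearly redundant with inputs you already have:

# Drop inputs that are almost entirely predictable from another input.
# The 0.95 threshold is an arbitrary choice for illustration.
import pandas as pd

def drop_dependent_inputs(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    corr = df.corr().abs()
    keep = []
    for col in df.columns:
        if all(corr.loc[col, kept] < threshold for kept in keep):
            keep.append(col)    # keep only inputs weakly correlated with those already kept
    return df[keep]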

Hopefully you’ll excuse my trip down neural net memory lane, but I wanted to set your expectations of neural network technology where mine were, because the state of the art is very different from what it was. We’ve probably all seen some of the results with image recognition, which seems to be one of the hottest topics in neural networks these days.

So back to when I read the article. My first thought was to think in terms of the neural network technology as I was familiar with it.

My starting point in designing my own chess neural net has to be a representation of the board layout. If you know chess, you probably have a pretty good idea how to describe a chess board. You can describe each piece using a pretty concise terminology. In this case, I figure it is irrelevant where a piece has been. Whether it started as a king’s knight’s pawn or a queen’s rook’s pawn doesn’t affect its performance. So you have 6 possible piece descriptors which need to be placed into the 64 squares that they could possibly reside upon. So, for example, imagine that I’m going to assign an integer to the pieces, and then use positive for white and negative for black:

Pawn = 1, Knight = 2, Bishop = 3, Rook = 4, King = 5, Queen = 6

My board might look something like this: 4,2,3,6,5,3,2,4,1,1,1,1,1,1,1,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0…-3,-2,-4
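As a quick illustration (my own sketch, not anything from the paper), here is that encoding spelled out:

# Encode a chess position as 64 integers: positive for white, negative for black, 0 for empty.
PIECE_CODES = {"P": 1, "N": 2, "B": 3, "R": 4, "K": 5, "Q": 6}

def starting_board():
    back_rank = ["R", "N", "B", "Q", "K", "B", "N", "R"]
    board = [PIECE_CODES[p] for p in back_rank]            # white back rank
    board += [PIECE_CODES["P"]] * 8                        # white pawns
    board += [0] * 32                                      # empty middle of the board
    board += [-PIECE_CODES["P"]] * 8                       # black pawns
    board += [-PIECE_CODES[p] for p in back_rank]          # black back rank
    return board

print(starting_board()[:8])    # [4, 2, 3, 6, 5, 3, 2, 4], matching the vector above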

If I am still living in the 90s, I’m immediately going to be worried about the amount of data, and might wonder if I can compress the representation of my board based on assumptions about the starting positions. I’ve got all those zeros in the center of my matrix and, as the game progresses, I’m going to be getting fewer pieces and more zeros. Sixty-four inputs seems like a lot (double that to get current position and post-move position), and I might hope to winnow that down to some manageable figure with the kind of efforts that I talked about above.

If I hadn’t realized my problem already, I’d start to figure it out now. Neural networks like inputs to be proportional. Obviously, binary inputs are good – something either affects the prediction or doesn’t. But for variable inputs, the variation must make sense in terms of the problem you are solving. Using the power output of a pump as an input to a neural network makes sense. Using the model number of that pump, as an integer, wouldn’t make sense unless there happens to be some coincidental relationship between the model number and some function meaningful to your process. Going back to my board description above, I could theoretically describe the “power” of my piece with a number between 1 and 10 (as an example), but any errors in my ability to accurately rank my pieces contribute to prediction errors. So is a Queen worth six times a pawn, or nine? Get that wrong, and my neural net training has an inaccuracy built in right up front. And, by the way, that means “worth” to the neural net, not to me or other human players.

A much better way to represent a chess game to a mathematical “intelligence” is to describe the pieces. So, for example, each piece could be described with two inputs giving its deviation from that piece’s starting position in the X and Y axes, with perhaps a third node to indicate whether the piece is on the board or captured. My starting board then becomes, by definition, 96 zeros, with numbers being populated (and generally growing) as the pieces move. It’s not terribly bigger (although rather horrifyingly so to my 90s self) than the representation by board, and I could easily get them on par by saying, for example, that captured pieces are moved elsewhere on the board, but well out of the 8X8 grid. Organizing by the pieces, though, is both non-intuitive for us human chess players and, in general, would seem less efficient in generalizing to other games. For example, if I’m modelling a card game (as I talked about in my previous post), describing every card and each of its possible positions is a much bigger data set than just describing what is in each hand and on the table. But, again, it should be clear that the description of the board is going to be considerably less meaningful as a mathematical entity than a description built up from the game pieces.
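To make the comparison concrete, here is a sketch of that piece-centric encoding – again my own illustration, with an arbitrary piece ordering and flag convention:

# Piece-centric encoding: (dx, dy, captured) per piece; 32 pieces x 3 = 96 numbers.
def encode_pieces(piece_states):
    """piece_states: list of dicts with 'start', 'current', and 'captured' keys."""
    encoding = []
    for piece in piece_states:
        if piece["captured"]:
            encoding += [0, 0, 1]                  # position is ignored once captured
        else:
            dx = piece["current"][0] - piece["start"][0]
            dy = piece["current"][1] - piece["start"][1]
            encoding += [dx, dy, 0]
    return encoding                                # all zeros at the start of a game

# Example: a knight that has moved from b1 to c3 and is still on the board.
knight = {"start": (1, 0), "current": (2, 2), "captured": False}
print(encode_pieces([knight]))    # [1, 2, 0]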

At this point, it is worth remembering again that this is no longer 1992. I briefly mentioned the advances in computing, both in power and in structure (the GPU architecture as superior for solving matrix math). That, in turn, has advanced the state of the art in neural network design and training. The combination goes a long way in explaining why image recognition is once again eyed as a problem for neural networks to address.

Consider the typical image. It is a huge number of pixels of (usually) highly-compressible data. But compressing the data will, as described above, befuddle the neural network. On the other hand, those huge, sparse matrices need representative training data to evenly cover the huge number of inputs, with that need increasing geometrically. It can quickly become, simply, too much of a problem to solve in a timely manner no matter what kind of computing power you’ve got to throw at it. But with that power, you can do new and interesting things. A solution for image recognition is to use “convolutional” networks.

Without trying to be too technically precise, I’ll try to capture the essence of this technique. The idea is that the input space can be broken up into sub-spaces (in an image, small fractions of the image) that then feed a significantly smaller neural network. Then, one might assume that those small networks are all the same or similar to each other. For image recognition, we might train 100s or even 1000s of networks operating on 1% of the image (in overlapping segments), creating a (relatively) small output based on the large number of pixels. Then those outputs feed a whole-image network. It is still a massive computational problem, but immensely smaller than the problem of training a network processing the entire image as the input.
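In modern terms, that “many small identical networks scanning overlapping patches” idea is what a convolutional layer does, with the patch networks sharing their weights. A minimal sketch, with arbitrary image size and layer choices:

# Minimal convolutional network: small shared filters scan overlapping image patches,
# then a dense layer makes a whole-image decision. All sizes are arbitrary.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(16, kernel_size=3, activation="relu",
                           input_shape=(64, 64, 1)),     # 3x3 patches over the image
    tf.keras.layers.MaxPooling2D(2),                     # shrink the intermediate output
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10, activation="softmax"),     # whole-image classification
])
model.summary()

The weight sharing is the point: one small filter reused everywhere is vastly cheaper to train than a separate network per patch.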

Does that make a chess problem solvable? It should help, especially if you have multiple convolutional layers. So there might be a neural network that describes each piece (6 inputs: the two-axis displacement plus on/off board, for both the old and new positions) and reduces it to maybe 3 outputs. A second could map similar pieces: where are the bishops? Where are the pawns? Another sub-network, repeated twice, could try just looking at one player at a time. It is still a huge problem, but I can see that it’s something that is becoming solvable given some time and effort.

Of course, this is Alphabet, Inc we are talking about. They’ve got endless supplies of (computing) time and (employee) effort, so if it is starting to look doable to a mere human like me, it is certainly doable for them.

At this point, I went back to the research paper, wherein I discovered that some of my intuition was right, although I hadn’t fully appreciated that last point. Just as a simple example, the input layer for the DeepMind system represents each piece as a board showing the position of that piece – 32 of these 8-by-8 (64-square) positional grids. They also use a history of turns, not just the current and next turn. It is orders of magnitude more data than I anticipated, but in extremely sparse data sets. In fact, it looks very much like image processing, but with much more ordered images (to a computer mind, at least). The paper states they are using Tensor Processing Units, a Google concoction meant to use hardware having similar advantages to the GPU and its matrix-math specialization, but further optimized specifically to solve this kind of neural network training problem.
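To picture how sparse that is, here is a rough sketch of one piece represented as its own 8-by-8 board – a simplification of my reading of the paper, not its exact input format:

# One piece = one 8x8 board that is all zeros except where that piece sits.
# Stack one such plane per piece (plus history planes) and the input is mostly zeros.
import numpy as np

def piece_plane(file_index, rank_index):
    plane = np.zeros((8, 8), dtype=np.float32)
    plane[rank_index, file_index] = 1.0
    return plane

white_king = piece_plane(4, 0)                   # the king on e1
print(int(white_king.sum()), white_king.size)    # 1 nonzero value out of 64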

So let’s finally go back to the claim that got all those singularity-is-nigh dreams dancing in the heads of internet commentators. The DeepMind team were able to train, in a matter of (really) twenty-four hours, a superhuman-level chess player with no a priori chess knowledge. Further, the paper states that the training set consists of 800 randomly-generated games (constrained only to be made up of legal moves), which seems like an incredibly small data set. Even realizing how big those representations are (with their sparse descriptions of the piece locations as well as per-piece historical information), it all sounds awfully impressive. Of course, that is 800 games per iteration. If I’m reading right, that might be 700k iterations in over 9 hours using hardware nearly inconceivable to us mortals.

And that’s just the end result of a research project that took how long? Getting to the point where they could hit the “run” button certainly took months, and probably years.

First you’ve got to come up with the data format, and the ability to generate games in that format. Surprisingly, the paper says that the exact representation wasn’t a significant factor. I suppose that is an advantage of its sparseness. Next, you’ve got to architect that neural net. How many convolutions over what subsets? How many layers? How many nodes? That’s a huge research project, and one that is going to need huge amounts of data – not the 800 randomly generated games you used at the end of it all.

The end result of all this – after a process involving a huge number of PhD hours and petaFLOPS of computational power – is that you’ve created a brain that can do one thing: learn about chess games. Yes, it is a brain without any knowledge in it – a tabula rasa – but it is a brain that is absolutely useless if provided knowledge about anything other than playing chess.

It’s still a fabulous achievement, no doubt. It is also research that is going to be useful to any number of AI learning projects going forward. But what it isn’t is any kind of demonstration that computers can out-perform people (or even mice, for that matter) in generic learning applications. It isn’t a demonstration that neural nets are being advanced into the area of general learning. This is not an Artificial Intelligence that could be, essentially, self-teaching and therefore life-like in terms of its capabilities.

And, just to let the press know, it isn’t the end of the world.

The Nature of my Game

I watched with glee while your kings and queens
fought for ten decades for the gods they made.

On October 19th, 1453, the French army entered Bordeaux. The province of Gascony, with Bordeaux as its capital, had been a possession of England since the marriage, in 1152, of Eleanor of Aquitaine to the soon-to-be King Henry II of England. Between 1429 and 1450, the Hundred Years War had seen a reversal of English fortunes and a series of French victories which began under the leadership of Jeanne D’Arc and culminated in the French conquest and subsequent control of Normandy.

Following the French victory in Normandy, a three-year struggle for the control of Gascony began. As the fight went on, dominance shifted from the English to the French – and then back again after the arrival of John Talbot, Earl of Shrewsbury. Eventually Talbot succumbed to numbers and politics, leading his army against a superior French position at Castillon on July 17th of 1453. He and his son both paid with their lives and, with the defeat of the English army and the death of its leadership, the fall of all of Gascony and Bordeaux inevitably followed before the year’s end.

The Battle of Castillon is generally cited as the end of the Hundred Years War, although the milestone is considerably more obvious in retrospect than it was at the time. The English defeat did not result in a great treaty or the submission of one king to another. England simply no longer had control of her French territories, save for Calais, and in fact never would again. Meanwhile, the world was moving on to other conflicts. The end of the Hundred Years War, in turn, is often cited as a key marker for the end of the (Late) Medieval period and the transition to the (Early) Modern period. Another major event of 1453, the Fall of Constantinople, is also a touchstone for delineating the transition to the modern world. Whereas the Hundred Years War marked a shift from the fragmented control of feudal fiefdoms to ever-more centralized nation states, the Fall of Constantinople buried the remains of the Roman Empire. In doing so, it saw the flight of Byzantine scholars from the now-Ottoman Empire to the west, and particularly to Italy, symbolically shifting the center of the presumptive heir to the Roman/Greek foundations of Western Civilization back, once again, to Rome.

The term “Renaissance” refers to the revival of classical Greek and Roman thought – particularly in art, but also in scholarship and civics. Renaissance scholarship held as a goal an educated citizenry with the skills of oration and writing sufficient to positively engage in public life. Concurrent with the strides in art and architecture, which exemplify the period, were revolutions in politics, science, and the economy. The combination of the creation of a middle class, through the availability of clerical work, and the emphasis on the value of the individual helped drive the nail into the coffin of Feudalism.

The designer of the boardgame Pax Renaissance references 1460 as the “start date” for the game, which lasts through that transitional period (roughly 70 years). Inherent in the design of the game, and expounded upon in the manual, is the idea that what drove the advances in art, science, technology, and government was the transition to a market economy. That transition shifted power away from the anointed nobility and transferred it to the “middle class.” The game’s players immerse themselves in the huge changes that took place during this time. They take sides in the three-way war of religion, with Islam, Catholicism, and the forces of the Reformation fighting for men’s souls. The game simulates the transition of Europe from feudalism to modern government, whether the nation-state empire or the republic. Players also can shift the major trade routes, refocusing wealth and power from the Mediterranean to northwestern Europe.

Technically speaking, I suppose Pax Renaissance is not a boardgame, because there is no board. It is a card game, although it stretches that definition as well. Some of the game’s cards are placed on the table to form a mapboard of Europe and the Mediterranean, while others serve a more straightforward “card” function – set down from the players’ hands onto the table in front of them. The game also has tokens that are deployed to the cards and then moved from card to card (or perhaps to the space between cards). While this starts to resemble the more typical board game, if you continue to see the game in terms of the cards, the tokens can be interpreted as indicating additional states beyond the usual up-or-down (and occasionally rotated-sideways) that cards convey.

Thinking about it this way, one might imagine that it drew some of its inspiration from a card game like Rummy – at least the way I learned to play Rummy. In that variant, players may draw from the discard pile, with deeper selections into the pile having an increased cost (of holding potentially negative-point cards in your hand). Once collected, cards remain in the hand or are played in front of the player. Of course, this doesn’t really map one-to-one. A unique point in Pax Renaissance is that there are no secret (and necessarily random) draws directly to the players’ hands. Instead, new cards are dealt into the “market,” the openly-visible and available pile, usually to a position that is too expensive to be accessed immediately, giving all players visibility several turns in advance.

Thus the game has no random component, assuming that one allows that the differing deck order (and content – not every card is used in every game) could be thought of as different “boards” as opposed to a random factor. So rather than a checkers or a chess with its fixed square grid, it is a version where there are many thousands of variations in the board shape and initial setup. Stretching the analogy to its breaking point, that variable board may also have a “fog of war,” as the playing space is slowly revealed over the course of the game.

I don’t actually mean to equate Pax Renaissance with Rummy or chess, but rather to establish some analogies that would be useful when trying to develop a programmed opponent. The game is the third in a “Pax” series from the designer, and can easily be seen as a refinement to that system. Theme-wise, it is a follow-on to his game Lords of the Renaissance from 20 years earlier. That title is a far more traditional map-and-counter game on the same subject, for 12 (!!!) players.

However, I’d like to look at this from an AI standpoint, and so I’ll use the comparison to checkers.

Since the “board” is revealed to all players equally (albeit incrementally) there is no hidden knowledge among players. Aside from strategy, what one player knows they all know. Given that factor, one supposes that victory must go to the player who can think more moves ahead than their opponents can.

I recently read an article about the development of a checkers artificial intelligence. The programmer in this tale took on checkers after his desire to build a chess intelligence was made obsolete by the Deep Blue development efforts in the early 1990s. It was suggested to him that he move to checkers, and he quickly developed a top-level player in the form of a computer algorithm. His solution was to attack the problem from both sides. He programmed the end-game, storing every possible combination of the remaining pieces and the path to victory from there. He also programmed a more traditional look-ahead algorithm, starting from a full (or nearly so) board and analyzing all the permutations forward to pick the best next move. Ultimately, his two algorithms met in the middle, creating a system that could fully comprehend every possible move in the game of checkers.

Checkers, as a target game for the development of AI, had two great advantages. First, it is a relatively simple game. While competitive, world-class play obviously has great depth, most consider a game of checkers to be fairly easy and casual. The board is small (half the spaces, functionally speaking, of chess) and the rules are very simple. There are typically only a handful of valid moves given any checkers board setup, versus dozens of valid moves in chess. Secondly, the number of players is large (who doesn’t know how to play?), and thus knowledge about what strategies to use is widely available, even if not quite as well analyzed as with chess. Thus, in a checkers game an AI can begin its work by using a “book.” That is, it uses a database of all of the common and winning strategies and their corresponding counter-strategies. If a game begins by following the path of an already-known game, the programmed AI can proceed down that set of moves.

At least until one player decides it’s fruitful to deviate from that path.
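To picture that “book,” here is a toy version of mine, with placeholder moves: a lookup keyed by the moves played so far, returning the stored reply until the game leaves the table.

# Toy opening "book": map the sequence of moves played so far to a stored reply.
# The checkers moves shown are placeholders, not real book lines.
OPENING_BOOK = {
    (): "11-15",
    ("11-15",): "23-19",
    ("11-15", "23-19"): "8-11",
}

def book_move(moves_so_far):
    """Return the book reply, or None once the game has left the book."""
    return OPENING_BOOK.get(tuple(moves_so_far))

print(book_move([]))                   # "11-15"
print(book_move(["11-15", "23-19"]))   # "8-11"
print(book_move(["9-13"]))             # None: out of book, switch to search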

After that, in the middle part of the game, a brute force search can come into play. Note that this applies to a programmed opponent only until the game is “solved,” as described in the article. Once the database has every winning solution from start to end, a search over the combinations of potential moves isn’t necessary. But when it is used, the AI searches all combinations of moves from the current position, selecting its best current-turn move based on what the (human) opponent is likely to do. At its most basic, this problem is often handled with a minimax algorithm. This is an algorithm that assumes that, whatever move you (thinking of yourself as the algorithm) make, your opponent will counter with the move least advantageous to you. Therefore, to find the best move for yourself, you alternately search for the best move you can make (the maximum-ranked choice) and then the reply of your opponent’s that is worst for you (the minimum-ranked choice), to determine the end state for any current-turn move.
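Here is a minimal depth-limited minimax sketch along those lines; the game interface (legal_moves, apply, heuristic, is_over) is hypothetical and would have to be filled in for a real game.

# Depth-limited minimax: pick the move with the best value, assuming the opponent
# always replies with the move that is worst for me. The game interface is hypothetical.
def minimax(state, depth, maximizing, game):
    if depth == 0 or game.is_over(state):
        return game.heuristic(state)             # e.g. my florins minus the opponent's
    if maximizing:                               # my turn: take the best available value
        return max(minimax(game.apply(state, m), depth - 1, False, game)
                   for m in game.legal_moves(state))
    else:                                        # opponent's turn: assume the worst for me
        return min(minimax(game.apply(state, m), depth - 1, True, game)
                   for m in game.legal_moves(state))

def best_move(state, depth, game):
    return max(game.legal_moves(state),
               key=lambda m: minimax(game.apply(state, m), depth - 1, False, game))

A real implementation would also prune branches it can prove are irrelevant, which is the technique in the Wikipedia example mentioned below.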

Wikipedia has a description, with animated example, of how such a search works using a technique to avoid searching fruitless branches of the tree. That inspired me to take a look at Pax Renaissance and do a similar evaluation of the choices one has to make in that game.

 

A smallish example of an animated game tree.

I’m following the color coding of the Wikipedia example, although in the above screenshot it’s not as clear as it should be. First of all, not everything is working correctly. Second, I took a screen shot while it is actively animating. The coloring of the nodes is done along with the calculations and, as the tree is expanded and/or pruned, the existing branches are shifted around to try to make things legible. It looked pretty cool when it was animating. Not quite so cool to watch once I upped the number of nodes by a factor of ten or so from what is displayed in the above diagram.

I’m assuming a two-player game. The actual Pax Renaissance is for 2-4 players, but initially I wanted to be as much like the “textbook” example as I could. The coloring is red for a pruned/unused branch and yellow for an active or best branch. The cyan block is the one actively being calculated, and blue means a block that has been “visited,” but has not yet completed evaluation. The numbers in each block are the best/worst heuristic at the leaf of each branch, which is four plies down (two computer turns and two opponent turns). Since at each layer the active player is assumed to choose the best move for them, the value in a circle should be the lowest value of any of its square children, and the value in a square should be the highest value of any of its circular children.

The value is computed by a heuristic, which presents its own set of problems. On one hand, the heuristic is a comparison between the two players. So if the computer has more money, the heuristic comes out positive; if the opponent has more money, it comes out negative, with the value being the difference between the two players’ bank accounts. In that sense, it is much easier than, say, positional elements on the chess board, because each evaluation is symmetrical. The hard part is comparing the apples to the oranges. A determination is needed much like the “points” assigned to pieces in chess. Beginning chess players learn that a rook is worth 5 pawns. But how much is a “Coronation Card” worth in florins? Perfecting a search algorithm means both getting the algorithm working and implementing that “domain knowledge,” the smarts about the balance among components, within the mathematical formulas of the search.
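For what it’s worth, here is the shape of such a heuristic; the asset names and florin values are exactly the kind of guesses described above, not settled numbers.

# Symmetric heuristic: my worth minus the opponent's worth, measured in "florins."
# The values assigned to non-cash assets are guesses; that is the domain knowledge.
ASSET_VALUES = {
    "florin": 1,
    "coronation_card": 8,     # how many florins is this really worth? pure judgment
    "bishop_token": 3,
    "rook_token": 2,
}

def player_worth(assets):
    """assets: dict mapping asset name to the count held."""
    return sum(ASSET_VALUES.get(name, 0) * count for name, count in assets.items())

def heuristic(my_assets, opponent_assets):
    return player_worth(my_assets) - player_worth(opponent_assets)

print(heuristic({"florin": 10}, {"florin": 6, "coronation_card": 1}))    # -4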

As I said, this was an early and simple example. To build this tree, I assumed that both players are going to start the game being frugal in their spending, and therefore use their first turns to buy the cheapest two cards. As the turns advance, they look at the combinations of playing those cards and buying more. Even in this simple example, I get something like 4000 possible solutions. In a later attempt (as I said, it starts looking pretty cluttered), I added some more game options and produced a tree of 30,000 different results. Remember, this is still only two turns and, even within those two turns, it is still a subset of moves. As in chess and checkers, once the initial moves are complete, the number of possibilities grows as the board develops.

At this point, I need to continue building more complete trees and see how well and efficiently they can be used to determine competitive play for this game. I’ll let you know if I find anything.

Cold War Chess

On May 16th, 1956, the newly constituted Republic of Egypt under the rule of Gamal Abdel Nasser recognized the communist People’s Republic of China.

Egypt had broken from British rule in 1952 with the Free Officers Movement and their coup which ended the Egyptian monarchy. The influence of the military, and particularly of Nasser, shifted toward greater involvement in politics. Nasser and the other officers ruled through a Revolutionary Command Council and, over the next few years, eliminated political opposition. Nasser became chairman of the Revolutionary Command Council and by 1954 was largely ruling Egypt himself.

In the run-up to the 1952 coup, Nasser had cultivated contacts with the CIA. His purpose was to provide a counterbalance to the British, should they attempt to oppose the Free Officers in their takeover. The U.S. came to see Nasser as an improvement over the deposed King Farouk and looked to him for support in the fight against communism. Nasser himself promoted pan-Arab nationalism, which concerned itself largely with the perceived threat from the newly-formed State of Israel. Nasser also became a leader of the newly-independent third world countries, helping create the policy of “neutralism,” under which the rising powers of the third world would remain unaligned in the Cold War.

It was within this context that the recognition of China appeared to be so provocative.

Egypt had begun drifting towards the communist camp due to frustration with the terms of arms sales and military support from the Western powers. A major deal with the USSR to purchase Czechoslovakian weapons in 1955 greatly enhanced Egypt’s profile in the region and put it on an even military footing with Israel.

When Nasser recognized China, the response from the U.S. was a counterpunch: withdrawing financial support for the Aswan Dam project, itself conceived as a mechanism for securing Egypt’s support on the anti-communist side of the Cold War. U.S. officials considered it a win-win. Either they would bend Nasser to their will, and achieve better compliance in the future, or he would be forced to go to the Soviets to complete the Aswan Dam. They figured that such a project was beyond the financial capabilities of the Russians, and the strain would hamper Soviet economic and military capabilities enough to more than make up for the deteriorated relations with Egypt. In that event, the ultimate failure of the project would likely realign Egypt with the U.S. anyway.

Egypt’s response continued to surprise. Despite having negotiated that the UK turn over control of the Suez Canal to Egypt, on July 26th, 1956, Nasser announced the nationalization of the Suez Canal and used the military to expel the British and seize control over its operation.

Known Bugs – Arab Israeli Wars

Arab Israeli Wars

Version 0.1.3.0. (April 22nd 2017)

  1. If you move a vehicle, and still have additional transfers left, but don’t want to use them, the system may wait forever for you to make another transfer. This doesn’t always happen, but if it does, moving a unit around within the same front will usually result in the prompt.
  2. If multiple informational displays are shown simultaneously and overlapped, the hide button may show through the upper card from the lower card.

Ain’t she a beautiful sight?

There was armored cars, and tanks, and jeeps,
and rigs of every size.

Twenty-eight years after the Jerusalem riots came the beginning of Operation Nachshon. The operation was named for the Biblical prince Nachshon, who himself received the name (meaning daring, though it also sounds similar to the word for “stormy sea waves”) during the Israelites’ exodus from Egypt. According to one text, when the Israelites first reached the Red Sea, the waters did not part before them. As the people argued on the sea’s banks about who would lead them forward, Nachshon entered the waters. Once he was up to his nose in the water, the sea parted.

Operation Nachshon was conceived to open a path between Tel Aviv and Jerusalem to deliver supplies and ammunition to a besieged Jerusalem, cut off from the coast as the British withdrew from Palestine. The road to Jerusalem led through land surrounded by Arab controlled villages, from which Palestinian militia (under the command of Abd al-Qadir al-Husayni) could ambush Israeli convoys attempting to traverse the route.

The operation started on April 5th with attacks on Arab positions and, in the pre-dawn hours of April 6th, a convoy arrived in Jerusalem from Tel Aviv. During the operation, the Israelis successfully captured or reduced more than a dozen villages and took control of the route. Several more convoys made it into Jerusalem before the end of the operation on April 20th.

Operation Nachshon was also the first time Jewish forces attempted to take and hold territory, as opposed to just conducting raids.

Today also marks a first for A Plague of Frogs. We are delivering, for free download, a PC game depicting the Arab Israeli War of 1948. Click for rules, download link, and other details.