Artificial, Yes, but Intelligent?

Keep your eyes on the road, your hands upon the wheel.

When I was in college, only one of my roommates had a car. The first time it snowed, he expounded upon the virtues of finding an empty, slippery parking lot and purposely putting your car into spins. “The best thing about a snow storm,” he said. At the time I thought he was a little crazy. Later, when I had the chance to try it, I came to see it his way. Not only is it great fun to slip and slide (without the risk of actually hitting anything), but getting used to how the car feels when the back end slips away is the first step in learning how to fix it, should it happen when it actually matters.

Recently, I found myself in an empty, ice-covered parking lot and, remembering the primary virtue of a winter storm, I hit the gas and yanked on the wheel… but I didn’t slide. Instead, I encountered a bunch of beeping and flashing as the electronic stability control system on my newish vehicle kicked in. What a disappointment it was. It also got me thinkin’.

How does a younger driver, who will almost never encounter a loss-of-traction slip, learn to recover from a slide or a spin once it starts? Back in the dark ages, when I was learning to drive, most cars were rear-wheel-drive with a big, heavy engine in the front. It was impossible not to slide around a little when driving in a snow storm. Knowing the tricks of slippery driving was almost a prerequisite for going out into the weather at all. Downshifting (or using those number gears on your automatic transmission), engine braking, and counter-steering were all part of getting from A to B. As a result*, when an unexpectedly slippery road surprises me, I instinctively take my foot off the brakes/gas and counter-steer without having to consciously remember the actual lessons. So does a car that prevents sliding 95% of the time result in a net increase in safety, even though it probably makes the other 5% worse? It’s not immediately obvious that it does.

On the Road

I was reminded of the whole experience a month or so ago when I read about the second self-driving car fatality. Both crashes happened within a week or so of each other in Western states; the first in Arizona and the second in California. In the second crash, Tesla’s semi-autonomous driving function was in fact engaged at the time of the crash, and the driver’s hands were not on the wheel six seconds prior. Additional details do not seem to be available from media reports, so the actual how and why must remain the subject of speculation. In the first, however, the media has engaged in the speculation for us. In Arizona, it was an Uber vehicle (a Volvo in this case) that was involved, and the fatality was not the driver. The media has also reported quite a lot that went wrong. The pedestrian who was struck and killed was jaywalking, which was certainly a major factor in her death. Walking out in front of a car at night is never a safe thing to do, whether or not that car is self-driving. Second, video was released showing that the driver was looking at something below dashboard level immediately before the crash, and thus was not aware of the danger until the accident occurred. The self-driving system itself did not seem to take any evasive action.

Predictably, the Arizona state government responded by halting the Uber self-driving car program. More on that further down, but first, a look at the driver’s distraction.

After that video was released, media attention focused on the distracted-driving angle of the crash. It also brought up the background of the driver, who had a number of violations behind him. Certainly the issue of electronics and technology detracting from safe driving is a hot topic and something, unlike self-driving Uber vehicles, that most of us encounter in our everyday lives. But I wonder whether this exposes a fundamental flaw in the self-driving technology.

It’s not exactly analogous to my snow situation above, but I think the core question is the same. The current implementation of self-driving car technology augments the human driver rather than replacing him or her. In doing so, however, it also removes some of the responsibility from the driver and makes him more complacent about the dangers he may be about to encounter. The more the car does for the driver, the greater the risk that the driver will allow his attention to wander rather than stay focused, on the assumption that the autonomous system has him covered. In the longer term, are there aspects of driving that the driver will not only stop paying attention to, but lose the ability to manage in the way a driver of a non-automated car once did?

Naturally, all of this can be designed into the self-driving system itself. Even if a car is capable of essentially driving itself over a long stretch of highway, it could be designed to engage the driver every so many seconds. Requiring otherwise unnecessary input from the operator is one way to make sure she is ready to actively control the car if needed. I note that we aren’t breaking new ground here. A modern aircraft can virtually fly itself, and yet some parts of the design (plus operational procedures) are surely in place to make sure that the pilots are ready when needed.
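
As a rough illustration of that idea, here is a minimal sketch of such a periodic engagement check. It is a hypothetical example only: the autopilot and steering-wheel-sensor interfaces, and the timing values, are invented for illustration and do not come from any real vendor’s system.

```python
import time

# Hypothetical engagement watchdog: prompt the driver at intervals and hand
# control back if there is no response. All interfaces here are invented.
ENGAGEMENT_INTERVAL_S = 30   # how often to demand driver input (assumed value)
RESPONSE_TIMEOUT_S = 5       # how long to wait for a response (assumed value)

def monitor_driver(autopilot, wheel_sensor):
    """Periodically prompt the driver; disengage if the prompt goes unanswered."""
    while autopilot.engaged:
        time.sleep(ENGAGEMENT_INTERVAL_S)
        autopilot.request_driver_input()            # e.g. chime plus dashboard prompt
        if not wheel_sensor.touched_within(RESPONSE_TIMEOUT_S):
            autopilot.alert_and_disengage()         # driver must take over
```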

As I said, the governmental response has been to halt the program. In general, it will be the governmental response that will be the biggest hurdle for self-driving car technology.

In the specific case of Arizona, I’m not actually trying to second guess their decision. Presumably, they set up a legal framework for the testing of self-driving technology on the public roadways. If the accident in question exceeded any parameters of that legal framework, then the proper response would be to suspend the testing program. On the other hand, it may be that the testing framework had no contingencies built into it, in which case any injuries or fatalities would have to be evaluated as they happen. If so, a reactionary legal response may not be productive.

I think, going forward, there is going to be a political expectation that self-driving technology should be flawless. Or, at least, perfect enough that it will never cause a fatality. Never mind that there are 30,000 to 40,000 motor vehicle deaths per year in the United States and over a million per year worldwide. It won’t be enough that an autonomous vehicle is safer than a non-autonomous vehicle; it will have to be orders of magnitude safer. Take, as an example, passenger airline travel. Even though flying is probably about 10X safer than driving, the regulatory environment for aircraft is much more stringent. Take away the “human” pilot (or driver) and I predict the requirements for safety will be even higher than they are for aviation.

Where I’m headed with all this is, I suppose, to answer the question of when we will see self-driving cars. It is tempting to see that as a technological question: when will the technology be mature enough to be sold to consumers? But it is more than that.

I recall seeing somewhere an example of “artificial intelligence” for a vehicle system. The example was a system in which detecting a ball rolling across the street triggers logic that anticipates there might be a child chasing that ball. A good example of an important problem to solve before putting an autonomous car onto a residential street. Otherwise, one child run down while chasing his ball might be enough for a regulatory shutdown. But how about the other side of that coin? What happens the first time a car swerves to avoid a non-existent child and hits an entirely-existent parked car? Might that cause a regulatory shutdown too?
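
To make that anticipation logic concrete, here is a toy sketch of the “rolling ball implies a possible child” heuristic. The object-detection and path-planner interfaces are entirely hypothetical; this is an illustration of the rule, not anyone’s actual perception stack.

```python
# Toy illustration of the anticipation heuristic described above.
# The detected-object and planner interfaces are hypothetical stand-ins.
def assess_hazards(detected_objects, planner):
    for obj in detected_objects:
        if obj.kind == "ball" and obj.is_crossing_roadway:
            # Treat a rolling ball as a proxy for a child who may chase it:
            # slow down and treat the area around it as potentially occupied.
            planner.reduce_speed(target_mph=10)
            planner.expand_caution_zone(center=obj.position, radius_m=15)
```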

Is regulatory shutdown inevitable?

Robo-Soldiers

At roughly the same time that the self-driving car fatalities were in the news, there was another announcement, even more closely related to my previous post. Video-game developer EA posted a video showing the results of a multi-disciplinary effort to train an AI player for their Battlefield 1 game (which, despite the name, is actually the fifth version of the Battlefield series). The narrative for this demo is similar to that of Google’s (DeepMind) chess program. The training was created, as the marketing pitch says, “from scratch using only trial and error.” Taken at face value, it would seem to run counter to my previous conclusion that the supposedly generic, self-taught AI was perhaps considerably less than it appeared.

On closer examination, however, even the minute-and-a-half demo video does not quite measure up to the headline hype, the assertion that neural nets have learned to play Battlefield essentially on their own. The video explains that the training method involves manually placing rewards throughout the map to try to direct the behavior of the agent-controlled soldiers.
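
In reinforcement-learning terms, that is a form of reward shaping. A rough sketch of the general idea might look like the following; this is not EA’s actual code, and the marker positions, values, and radius are made up for illustration.

```python
# Hand-placed reward markers on the map steer what the learning agent
# explores. Positions and values here are invented for illustration.
REWARD_MARKERS = {
    (120.0, 40.0): 1.0,   # e.g. near an objective
    (200.0, 75.0): 0.5,   # e.g. along a useful route
}
MARKER_RADIUS = 5.0       # how close the agent must be to collect a bonus

def distance(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def shaped_reward(base_reward, agent_position):
    """Add bonuses from nearby hand-placed markers to the game's own reward signal."""
    bonus = sum(value for pos, value in REWARD_MARKERS.items()
                if distance(agent_position, pos) < MARKER_RADIUS)
    return base_reward + bonus
```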

The time frame for a project like this one would seem to preclude its being directly inspired by DeepMind’s published results for chess. Indeed, the EA Technical Director explains that it was DeepMind’s earlier work with Atari games that first motivated them to apply the technology to Battlefield. Whereas the chess example demonstrated the ability to play chess at a world-class level, the EA demonstration merely shows that the AI agents grasp the basics of game play, and not much more. The team’s near-term aspirations are limited; use of AI for quality testing is named as an expected benefit of this project. He does go so far as to speculate that, a few years out, the technology might be able to compete with human players within certain parameters. Once again, a far cry from a self-learning intelligence poised to take over the world.

Even so, the video demonstration offers a disclaimer: “EA uses AI techniques for entertainment purposes only. The AI discussed in this presentation is designed for use within video games, and cannot operate in the real world.”

Sounds like they wanted to nip any AI overlord talk in the bud.

From what I’ve seen of the Battlefield information, it is results only. There is no discussion of the methods used to create training data sets and design the neural network. Also absent is any information on how much effort was put into constructing this system that can learn “on its own.” I have a strong sense that it was a massive undertaking, but no data to back that up. When that process becomes automated (or even part of the self-evolution of a deep neural network), so that one can quickly go from a data set to a trained network (quickly in developer time, as opposed to computing time), the promise of the “generic intelligence” could start to materialize.

So, no, I’m not made nervous that an artificial intelligence is learning how to fight small unit actions. On the other hand, I am surprised at how quickly techniques seem to be spreading. Pleasantly surprised, I should add.

While the DeepMind program isn’t open for inspection, some of the fundamental tools are publicly available. Google’s TensorFlow library has been available as open source since late 2015. As of February this year, Google is also making their Tensor Processing Unit (TPU) available as a cloud service (still in beta, as far as I know). Among the higher-profile uses of TensorFlow is the app DeepFake, which allows its users to swap faces in video. A demonstration shows the app, using a standard desktop PC and about half an hour’s training time, producing something comparable to Industrial Light and Magic’s spooky-looking Princess Leia reconstruction.
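
To give a sense of just how accessible these tools have become, a small TensorFlow (Keras) model can be defined and compiled in a handful of lines. The layer sizes and data below are placeholders for illustration, not anything from the projects mentioned above.

```python
import tensorflow as tf

# A tiny classifier, just to show how little code the open-source library
# requires. Shapes and training data are placeholders.
model = tf.keras.Sequential([
    tf.keras.layers.Dense(64, activation="relu", input_shape=(10,)),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)   # train on your own data
```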

Meanwhile, Facebook also has a project inspired by DeepMind’s earlier Go neural network system. In a challenge to Google’s secrecy, the Facebook project has been made completely open source, allowing for full inspection of, and participation in, its experiments. At the beginning of May, Facebook announced that their AI bot had gone 14-0 against top-ranked Go players.

Competition and massive online participation are bound to move this technology forward very rapidly.


The future’s uncertain and the end is always near.


*To be sure, I learned a few of those lessons the hard way, but that’s a tale for another day.