Tuesday, November 19, 2019

Relying on AI

There's lots of talk nowadays about the future of Artificial Intelligence (AI) and how computerized robots will eventually reach a stage of development at which they are essentially human, but much smarter.

Whether that future will ever be realized, I doubt anyone knows, though I confess to being skeptical that the characteristics of consciousness will ever be replicated in a machine. Nevertheless, I admit I could be wrong.

One thing that seems certain, however, is that AI has not yet arrived at that future. Robert Marks, Director and Senior Fellow at the Walter Bradley Center for Natural and Artificial Intelligence, in a piece at Mind Matters, lists several events that illustrate that the abilities of AI are still dependent upon the abilities of human programmers.

Marks discusses a half dozen or so examples, but here are two of the more interesting - and tragic, even near-catastrophic - failures of AI that he mentions:
An Uber self-driving car killed a pedestrian in 2018: “According to data obtained from the self-driving system, the system first registered radar and LIDAR observations of the pedestrian about six seconds before impact, when the vehicle was traveling at 43 mph… As the vehicle and pedestrian paths converged, the self-driving system software classified the pedestrian as an unknown object, as a vehicle and then as a bicycle with varying expectations of future travel path.

At 1.3 seconds before impact, the self-driving system determined that an emergency braking maneuver was needed to mitigate a collision.” By then, it was too late.
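A rough back-of-the-envelope calculation shows why 1.3 seconds was too late. The sketch below assumes hard braking at about 7 m/s² (a typical dry-pavement figure, not a number from the accident report): at 43 mph the car was only about 25 meters from the pedestrian when the system finally called for emergency braking, yet it needed roughly 26 meters just to come to a stop.

    # Why 1.3 seconds was too late: a rough estimate.
    # Assumes hard braking at about 7 m/s^2 (typical dry pavement);
    # the real figure depends on tires, road surface, and load.
    MPH_TO_MPS = 0.44704

    speed = 43 * MPH_TO_MPS        # ~19.2 m/s
    warning_time = 1.3             # seconds before impact
    decel = 7.0                    # assumed braking deceleration, m/s^2

    distance_to_pedestrian = speed * warning_time   # ~25 m away when braking was ordered
    stopping_distance = speed ** 2 / (2 * decel)    # ~26 m needed to stop from 43 mph

    print(f"distance to pedestrian: {distance_to_pedestrian:.1f} m")
    print(f"distance needed to stop: {stopping_distance:.1f} m")
    # Even with instant, maximum braking the car cannot stop in time;
    # at best it sheds some speed before impact.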
This next one occurred when AI could be said to have been in its infancy, but it's nonetheless frightening to think how close we came to a worldwide holocaust due to our reliance on machines:
During the Cold War, the Soviet Union constructed Oko, a system tasked with early detection of a missile attack from the United States. Oko detected such an attack on September 26, 1983. Sirens blared and the system declared that an immediate Soviet retaliatory strike was mandatory.

A Soviet officer in charge felt that something was not right and did not launch the retaliatory strike. His decision was the right one. Oko had mistakenly interpreted sun reflections off of clouds as inbound American missiles. By making this decision, the Soviet officer, Lt. Col. Stanislav Petrov, saved the world from thermonuclear war.
Of course, when mistakes and oversights are discovered, programs can be rewritten to avoid them in the future, but, as Marks observes,
The cost of discovering an unexpected contingency can, however, be devastating. A human life or a thermonuclear war is too high a price to pay for such information. And even after a specific problem is fixed, additional unintended contingencies can continue to occur.
He notes that there are three ways to minimize unintended consequences:
(1) using systems with low complexity, (2) employing programmers with elevated domain expertise, and (3) testing. Real-world testing can expose many unintended consequences, hopefully without harming anyone.

For AI systems, low complexity means narrow AI. AI thus far, when reduced to commercial practice, has been relatively narrow. As the conjunctive complexity of a system grows linearly, the number of contingencies grows exponentially. Domain expertise [on the part of the programmers] can anticipate many of these contingencies and minimize those which are unintended.
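Marks's point about exponential growth is easy to make concrete: if a system's behavior depends on n independent yes/no conditions, there are 2^n combinations of those conditions, and every additional condition doubles the number of cases a programmer or tester has to think through. The conditions in the sketch below are made up purely for illustration.

    from itertools import product

    # Hypothetical binary conditions a driving system might face.
    conditions = ["night", "rain", "jaywalking_pedestrian", "sensor_occluded", "gps_degraded"]

    # Every combination of these conditions is a distinct contingency.
    contingencies = list(product([False, True], repeat=len(conditions)))
    print(len(conditions), "conditions ->", len(contingencies), "contingencies")  # 5 -> 32

    # Each added condition doubles the count: 2**n quickly dwarfs n.
    for n in (10, 20, 30):
        print(n, "conditions ->", 2 ** n, "contingencies")

Exhaustively testing even a modestly complex system quickly becomes impossible, which is why Marks stresses narrow AI and domain expertise as ways to keep the space of contingencies manageable.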
But, as the examples he gives in his column illustrate, even the best of programmers can't anticipate everything. The greatest threat, Marks concludes, "is the unintended contingency, the thing that never occurred to the programmer."