10/11/2016 08:22

Actual Reality Game

This morning The Guardian published a feature based on Tim Harford’s book Messy. https://www.theguardian.com/technology/2016/oct/11/crash-how-computers-are-setting-us-up-disaster

I remember discussing the paradox of automation with the programmers at St. Vincent hospital about 25 years ago. The phrase wasn't in use but the concept was. I'm trying to remember the phrases for the two approaches: automation that flies the plane until it fails -- the Airbus A330 -- and the alternative where the pilot flies the plane and the automation watches for mistakes. That was during the short moment when I had my own title, and I instructed the developers that we would use the latter approach.

From the article: An alternative solution is to reverse the role of computer and human. Rather than letting the computer fly the plane with the human poised to take over when the computer cannot cope, perhaps it would be better to have the human fly the plane with the computer monitoring the situation, ready to intervene. Computers, after all, are tireless, patient and do not need practice. Why, then, do we ask people to monitor machines and not the other way round?

What puzzles me is how this could have been a very public debate for over a quarter of a century and still appear in a feature story this morning as an unresolved problem. It isn't that I myself am so brilliantly ahead of technical philosophy; I simply noticed the debate about how to design airplanes and transferred it to the less dangerous realm of payments and billing.
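To make the contrast concrete, here is a minimal sketch -- not from the article, and not the system we built at the hospital -- of what "the human acts, the computer monitors" might look like in a billing setting. Every name and threshold in it is a hypothetical illustration of the pattern, nothing more.

```python
from dataclasses import dataclass

@dataclass
class Charge:
    patient_id: str
    code: str       # hypothetical billing code keyed in by the operator
    amount: float   # dollar amount keyed in by the operator

def monitor(charge, usual_amounts):
    """The computer does not decide the charge; it watches what the human
    entered and raises warnings for the human to resolve."""
    warnings = []
    usual = usual_amounts.get(charge.code)
    if usual is None:
        warnings.append(f"Unknown billing code {charge.code!r}; please verify.")
    elif abs(charge.amount - usual) > 0.5 * usual:
        warnings.append(
            f"Amount {charge.amount:.2f} differs from the usual "
            f"{usual:.2f} for code {charge.code!r} by more than 50%."
        )
    if charge.amount <= 0:
        warnings.append("Amount must be positive.")
    return warnings

# Usage: the operator stays in the loop; the system only flags anomalies.
usual_amounts = {"XR-CHEST": 120.00}   # hypothetical reference data
entry = Charge(patient_id="P123", code="XR-CHEST", amount=1200.00)
for w in monitor(entry, usual_amounts):
    print("WARNING:", w)   # the human reads the warning and decides what to do
```

The point of the design is that the operator keeps practicing the skill of entering and judging charges, while the software does what software is good at: tireless, patient checking.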

From the article again: The paradox of automation, then, has three strands to it. First, automatic systems accommodate incompetence by being easy to operate and by automatically correcting mistakes. Because of this, an inexpert operator can function for a long time before his lack of skill becomes apparent – his incompetence is a hidden weakness that can persist almost indefinitely. Second, even if operators are expert, automatic systems erode their skills by removing the need for practice. Third, automatic systems tend to fail either in unusual situations or in ways that produce unusual situations, requiring a particularly skilful response. A more capable and reliable automatic system makes the situation worse.

This was all in the public conversation before I raised the issue with the hospital programmers. The Guardian article states it as clearly as I've ever seen and gives a wider variety of examples, but nothing in it is conceptually new; the bad design has been applied to new areas, but the underlying concept is the same as in the 1990s. Where I transferred the debate to another realm, the rest of the world seems to have transferred the error to other realms without the debate, through 25 years of advancing technology.

Psychologist Gary Klein is right to say, "People get passive and less vigilant when algorithms make the decisions." What that misses is that people weren't being more vigilant before the algorithms were making decisions. They were simply defaulting to some other decider, such as what they think everybody else does, what the TV commercials told them, or the way their parents ran their families (even if the family they grew up in was dysfunctional). We still do all those things, too.

And so, because we have always made decisions by deferring to someone else, we continue to do so with automation.

