The Design of Future Things

One way for the system to communicate its goals and intentions to a person is through an explicit presentation of the strategy
that is being followed. One research group, Christopher Miller and his colleagues, proposes that systems share a “playbook” with everyone involved. The group describes their work as “based on a shared model of the tasks in the domain. This model provides a means of human-automation communication about plans, goals, methods and resource usage—a process akin to referencing plays in a sports team's playbook. The Playbook enables human operators to interact with subordinate systems with the same flexibility as with well-trained human subordinates, thus allowing for adaptive automation.” The idea is that the person can convey intentions by selecting a particular playbook for the automatic systems to follow, or, if the automation is in control, it shows the playbook it has selected. These researchers are concerned with the control of airplanes, so the playbook might specify how the system will control takeoff and the climb to cruising altitude. Whenever the machine is working autonomously, controlling what is happening, it always displays the play that it is following, letting the human understand how the immediate actions fit into the overall scheme and, if necessary, change the choice of plays. A critical component here is the form in which the play is shown. A written description or a list of planned actions is not likely to be acceptable: it would require too much effort to process. For the playbook approach to be effective, especially for everyday people who do not wish to undergo training to accommodate the intelligent objects in their homes, a simple means of displaying the plays is essential.
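The playbook idea can be sketched in code. The following is a minimal, hypothetical illustration (the class names, play names, and steps are invented for this sketch, not Miller's actual Playbook system): whichever party selects the play, the automation always exposes which play it is running and how far along it is, so the person can glance at the display and, if necessary, override the choice.

```python
from dataclasses import dataclass


@dataclass
class Play:
    """One entry in the shared playbook: a named strategy plus its steps."""
    name: str
    steps: list


@dataclass
class Autopilot:
    """Toy automation that always exposes the play it is following."""
    playbook: dict          # shared between human and machine
    current: str = ""       # key of the selected play
    step: int = 0           # how many steps have been completed

    def select(self, play_name: str) -> None:
        # Either the human or the automation can select a play;
        # in both cases the choice is visible to the other party.
        self.current = play_name
        self.step = 0

    def advance(self) -> None:
        # Called as the automation completes each step of the play.
        self.step += 1

    def display(self) -> str:
        """A simple, glanceable summary of the play and its progress."""
        play = self.playbook[self.current]
        total = len(play.steps)
        done = min(self.step, total)
        now = play.steps[done] if done < total else "complete"
        return f"{play.name}: {done}/{total} steps done, now: {now}"


# Usage: the person picks a play; the machine reports its progress.
plays = {"climb": Play("Climb to cruise",
                       ["take off", "retract gear", "climb", "level off"])}
autopilot = Autopilot(playbook=plays)
autopilot.select("climb")
autopilot.advance()
print(autopilot.display())  # → Climb to cruise: 1/4 steps done, now: retract gear
```

A real system would render this graphically rather than as text, as the copier example below suggests, but the essential point is the same: the current play and its progress are always on display.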

I've seen similar concepts at work on the displays of large commercial copiers, where the display clearly shows the “playbook”
being followed: perhaps 50 copies, duplex, two-sided copying, stapled, and sorted. I have seen nice graphical depictions, with the image of a piece of paper turning over, showing printing on both sides and how the printed page is combined with other pages so that it is easy to tell if it has been aligned properly, with the page flipped along the short edge or the long one, and with a depiction of the final stapled documents stacked up neatly in a pile, with the height of the pile showing how far the job has progressed.

When automation is operating relatively autonomously under loose-rein conditions, display schemes similar to the playbook are especially relevant to allow people to determine just what strategy the machine is following and how far along it is in its actions.

The Bicycles of Delft

Delft is a charming small town near the North Sea coast of the Netherlands, home of the Technische Universiteit Delft, or in English, the Delft University of Technology. The streets are narrow, with several major canals encircling the business district. The walk from the hotel section to the university is picturesque, meandering past and over canals, through the narrow winding streets. The danger comes not from automobiles but from the swarms of bicycles, weaving their way at great speeds in all directions and, to my eyes, appearing out of nowhere. In Holland, bicycles have their own roadways, separate from the roads and pedestrian paths. But not in the central square of Delft. There, bicyclists and pedestrians mix.

FIGURE 3.3

Holland is the land of multiple bicycles, which, although environmentally friendly, present a traffic hazard to people trying to walk across the square. The rule is: Be predictable. Don't try to help the bicyclists. If you stop or swerve, they will run into you.

(Photograph by the author.)

“It's perfectly safe,” my hosts kept reassuring me, “as long as you don't try to help out. Don't try to avoid the bikes. Don't stop or swerve. Be predictable.” In other words, maintain a steady pace and a steady direction. The bicyclists have carefully calculated their course so as to miss one another and all the pedestrians under the assumption of predictability. If pedestrians try to outmaneuver the bicyclists, the results will be disastrous.

The bicyclists of Delft provide a model for how we might interact with intelligent machines. After all, here we have a person, the walker, interacting with an intelligent machine, a bicycle. In this case, the machine is actually the couplet of bicycle+person, with the person providing both the motive power
and the intelligence. Both the person walking and the bicycle+person have the full power of the human mind controlling them; yet, these two cannot coordinate successfully. The combination bicycle+person doesn't lack intelligence: it lacks communication. There are many bicycles, each traveling quite a bit faster than the pace of the walker. It isn't possible to talk to the bicyclists because, by the time they are close enough for conversation, it is too late to negotiate. In the absence of effective communication, the way to interact is for the person walking to be predictable so that no coordination is required: only one of the participants, the bicycle+person, has to do the planning; only one has to act.

This story provides a good lesson for design. If a person cannot coordinate activities with an intelligent, human-driven machine, the bicycle+person, why would we ever think the situation would be any easier when the coordination must take place with an intelligent machine? The moral of this story is that we shouldn't even try. Smart machines of the future should not try to read the minds of the people with whom they interact, either to infer their motives or to predict their next actions. The problem with doing this is twofold: first, they probably will be wrong; second, doing this makes the machine's actions unpredictable. The person is trying to predict what the machine is going to do while, at the same time, the machine is trying to guess the actions of the person—a sure guarantee of confusion. Remember the bicycles of Delft. They illustrate an important rule for design: be predictable.

Now comes the next dilemma: which should be the predictable element, the person or the intelligent device? If the two elements were of equal capability and equal intelligence, it
wouldn't matter. This is the case with the bicyclists and pedestrians. The intelligence of both comes from human beings, so it really doesn't matter whether it is the bicyclists who are careful to act predictably or the pedestrians. As long as everyone agrees who takes which role, things will probably work out okay. In most situations, however, the two components are not equal. The intelligence and general world knowledge of people far exceeds the intelligence and world knowledge of machines. People and bicyclists share a certain amount of common knowledge or common ground: their only difficulty is that there is not sufficient time for adequate communication and coordination. With a person and a machine, the requisite common ground does not exist, so it is far better for the machine to behave predictably and let the person respond appropriately. Here is where the playbook idea could be effective by helping people understand just what rules the machine is following.

Machines that try to infer the motives of people, that try to second-guess their actions, are apt to be unsettling at best, and in the worst case, dangerous.

Natural Safety

The second example illustrates how the accident rate can be reduced by changing people's perception of safety. Call this “natural” safety, for it relies upon the behavior of people, not safety warnings, signals, or equipment.

Which airport has fewer accidents: an “easy” one that is flat, with good visibility and weather conditions (e.g., Tucson, in the Arizona desert) or a “dangerous” one with hills, winds, and a difficult approach (e.g., San Diego, California, or Hong Kong)?
Answer—the dangerous ones. Why? Because the pilots are alert, focused, and careful. One of the pilots of an airplane that had a near crash while attempting to land at Tucson told NASA's voluntary accident reporting system that “the clear, smooth conditions had made them complacent.” (Fortunately, the terrain avoidance system alerted the pilots in time to prevent an accident. Remember the first example that opened
chapter 2
, where the plane said, “Pull up, Pull up,” to the pilots? That's what saved them.) The same principle about perceived versus real safety holds with automobile traffic safety. The subtitle of a magazine article about the Dutch traffic engineer Hans Monderman makes the point: “Making driving seem more dangerous could make it safer.”

People's behavior is dramatically affected by their perception of the risk they are undergoing. Many people are afraid of flying but not of driving in an automobile or, for that matter, of being struck by lightning. Yet driving in a car, whether as driver or passenger, is far riskier than flying as a passenger on a commercial airline. As for lightning, in 2006 there were three deaths in U.S. commercial aviation but around fifty deaths by lightning. Flying is safer than being out in a thunderstorm. Psychologists who study perceived risk have discovered that when an activity is made safer, quite often the accident rate does not change. This peculiar result has led to the hypothesis of “risk compensation”: when an activity is changed so that it is perceived to be safer, people take more risks, thereby keeping the accident rate constant.

Thus, adding seat belts to cars, or helmets to motorcyclists, or protective padding to football uniforms, or higher, better fitting boots for skiers, or antiskid brakes and stability controls to automobiles leads people to change their behavior to keep risk the
same. The same principle even applies to insurance: If they have insurance against theft, people aren't as careful with their belongings. Forest rangers and mountaineers have discovered that providing trained rescue squads tends to increase the number of people who risk their lives because they now believe that if they get into trouble, they will be rescued.

Risk homeostasis
is the term given to this phenomenon in the literature on safety.
Homeostasis
is the scientific term for systems that tend to maintain a state of equilibrium, in this case, a constant sense of safety. Make the environment appear safer, goes this hypothesis, and drivers will engage in riskier behavior, keeping the actual level of safety constant. This topic has been controversial since it was first introduced in the 1980s by the Dutch psychologist Gerald Wilde. The controversy surrounds the reasons for the effect and its size, but there is no doubt that the phenomenon itself is real. So, why not put this phenomenon to use in reverse? Why not make things safer by making them look more dangerous than they actually are?

Suppose that we got rid of traffic safety features: no more traffic lights, stop signs, pedestrian crossings, wider streets, or special bike paths. Instead, we might add roundabouts (traffic circles) and make streets narrower. The idea seems completely crazy; it reverses common sense. Yet, it is precisely what the Dutch traffic engineer Hans Monderman advocates for cities. Proponents of this method use the name “Shared Space” to describe their work with several successful applications across Europe: Ejby in Denmark, Ipswich in England, Ostende in Belgium, Makkinga and Drachten in the Netherlands. This philosophy does not change the need for signals and regulations on high-speed highways, but in small towns and even in restricted districts within large cities,
the concept is appropriate. The group reports that in London, England, “Shared Space principles were used for the redesigning of the busy shopping street Kensington High Street. Because of the positive results (a 40% reduction in road accidents) the city council are going to apply Shared Space in Exhibition Road, the central artery in London's most important museum district.” Here is how they describe their philosophy:

 

Shared Space. That is the name of a new approach to public space design that is receiving ever-wider attention. The striking feature is the absence of conventional traffic management measures, such as signs, road marking, humps and barriers, and the mixing of all traffic flows. “Shared Space gives people their own responsibility for what ‘their' public space will look like and how they are going to behave in it,” says Mr. Hans Monderman, head of the Shared Space Expert Team.

“The traffic is no longer regulated by traffic signs, people do the regulating themselves. And precisely that is the whole idea. Road users should take each other into account and return to their everyday good manners. Experience shows that the additional advantage is that the number of road accidents decreases in the process.”

This concept of reverse risk compensation is a difficult policy to follow, and it takes a courageous city administration. Even though it might reduce accidents and fatalities overall, it can't prevent all accidents, and as soon as there is one fatality, anxious residents will argue for warning signs, traffic lights, special pedestrian
paths, and widening of the streets. It is very difficult to sustain the argument that if it looks dangerous, it may actually be safer.
