
The Checklist Manifesto


  We rose into the clouds. I could see the city fall away below us. We slowly climbed to twenty thousand feet. And that was when the DOOR FWD CARGO light went on. I'd forgotten that this was the whole point of the exercise. The first couple lines of the electronic checklist came up on the screen, but I grabbed the paper one just so I could see the whole thing laid out.

  It was, I noticed, a READ-DO checklist--read it and do it--with only seven lines. The page explained that the forward cargo door was not closed and secure and that our objective was to reduce the risk of door separation.

  This was just a simulation, I knew perfectly well. But I still felt my pulse picking up. The checklist said to lower the cabin pressure partially. Actually, what it said was, "LDG ALT selector"--which Boorman showed me is the cabin pressure control on the overhead panel--"PULL ON and set to 8000." I did as instructed.

  Next, the checklist said to descend to the lowest safe altitude or eight thousand feet, whichever is higher. I pushed forward on the yoke to bring the nose down. Boorman indicated the gauge to watch, and after a few minutes we leveled off at eight thousand feet. Now, the checklist said, put the air outflow switches on manual and push them in for thirty seconds to release the remaining pressure. I did this, too. And that was it. The plane didn't explode. We were safe. I wanted to give Boorman a high five. This flying thing is easy, I wanted to say.

  There were, however, all kinds of steps that the checklist had not specified--notifying air traffic control that we had an emergency, for example, briefing the flight attendants, determining the safest nearby airport where we could land and have the cargo door inspected. I hadn't done any of these yet. But Boorman had. The omissions were intentional, he explained. Although these are critical steps, experience had shown that professional pilots virtually never fail to perform them when necessary. So they didn't need to be on the checklist--and in fact, he argued, shouldn't be there.

  It is common to misconceive how checklists function in complex lines of work. They are not comprehensive how-to guides, whether for building a skyscraper or getting a plane out of trouble. They are quick and simple tools aimed to buttress the skills of expert professionals. And by remaining swift and usable and resolutely modest, they are saving thousands upon thousands of lives.

  One more aviation checklist story, this one relatively recent. The incident occurred on January 17, 2008, as British Airways Flight 38 approached London from Beijing after almost eleven hours in the air with 152 people aboard. The Boeing 777 was making its final descent into Heathrow airport. It was just past noon. Clouds were thin and scattered. Visibility was more than six miles. The wind was light, and the temperature was mild despite the season--50 degrees Fahrenheit. The flight had been completely uneventful to this point.

  Then, at two miles from the airport, 720 feet over a residential neighborhood, just when the plane should have accelerated slightly to level off its descent, the engines gave out. First the right engine rolled back to minimal power, then the left. The copilot was at the controls for the landing, and however much he tried to increase thrust, he got nothing from the engines. For no apparent reason, the plane had gone eerily silent.

  He extended the wing flaps to make the plane glide as much as possible and to try to hold it on its original line of approach. But the aircraft was losing forward speed too quickly. The plane had become a 350,000-pound stone falling out of the air. Crash investigators with Britain's Air Accidents Investigation Branch later determined that it was falling twenty-three feet per second. At impact, almost a quarter mile short of the runway, the plane was calculated to be moving at 124 miles per hour.

  Only by sheer luck was no one killed, either on board or on the ground. The plane narrowly missed crashing through the roofs of nearby homes. Passengers in cars on the perimeter road around Heathrow saw the plane coming down and thought they were about to be killed. Through a coincidence of international significance, one of those cars was carrying British prime minister Gordon Brown to his plane for his first official visit to China. "It was just yards above our heads, almost skimming a lamppost as the plane came in very fast and very, very low," an aide traveling with the prime minister told London's Daily Mirror.

  The aircraft hit a grassy field just beyond the perimeter road with what a witness described as "an enormous bang." The nose wheels collapsed on impact. The right main landing gear separated from the aircraft, and its two right front wheels broke away, struck the right rear fuselage, and penetrated through the passenger compartment at rows 29 and 30. The left main landing gear pushed up through the wing. Fourteen hundred liters of jet fuel came pouring out. Witnesses saw sparks, but somehow the fuel did not ignite. Although the aircraft was totaled by the blunt force of the crash, the passengers emerged mostly unharmed--the plane had gone into a thousand-foot ground slide that slowed its momentum and tempered the impact. Only a dozen or so passengers required hospitalization. The worst injury was a broken leg.

  Investigators from the AAIB were on the scene within an hour trying to piece together what had happened. Their initial reports, published one month and then four months after the crash, were documents of frustration. They removed the engines, fuel system, and data recorders and took them apart piece by piece. Yet they found no engine defects whatsoever. The data download showed that the fuel flow to the engines had reduced for some reason, but inspection of the fuel feed lines with a boroscope--a long fiberoptic videoscope--showed no defects or obstructions. Tests of the valves and wiring that controlled fuel flow showed they had all functioned properly. The fuel tanks contained no debris that could have blocked the fuel lines.

  Attention therefore turned to the fuel itself. Tests showed it to be normal Jet A-1 fuel. But investigators, considering the flight's path over the Arctic Circle, wondered: could the fuel have frozen in flight, caused the crash, then thawed before they could find a trace of it? The British Airways flight had followed a path through territory at the border of China and Mongolia where the recorded ambient air temperature that midwinter day was -85 degrees Fahrenheit. As the plane crossed the Ural Mountains and Scandinavia, the recorded temperature fell to -105 degrees. These were not considered exceptional temperatures for polar flight. Although the freezing point for Jet A-1 fuel is -53 degrees, the dangers were thought to have been resolved. Aircraft taking Arctic routes are designed to protect the fuel against extreme cold, and the pilots monitor the fuel temperature constantly. Cross-polar routes for commercial aircraft opened in February 2001, and thousands of planes have traveled them without incident since. In fact, on the British Airways flight, the lowest fuel temperature recorded was -29 degrees, well above the fuel's freezing point. Furthermore, the plane was over mild-weathered London, not the Urals, when the engines lost power.

  Nonetheless, investigators remained concerned that the plane's flight path had played a role. They proposed an elaborate theory. Jet fuel normally contains a small amount of water, less than two drops per gallon. During cold-air flights, the moisture routinely freezes and floats in the fuel as a suspension of tiny ice crystals. This had never been considered a significant problem. But maybe on a long, very smooth polar flight--as this one was--the fuel flow becomes so slow that the crystals have time to settle and perhaps accumulate somewhere in the fuel tank. Then, during a brief burst of acceleration, such as on the final approach, the sudden increase in fuel flow might release the accumulation, causing blockage of the fuel lines.

  The investigators had no hard evidence for this idea. It seemed a bit like finding a man suffocated in bed and arguing that all the oxygen molecules had randomly jumped to the other end of the room, leaving him to die in his sleep--possible, but preposterously unlikely. Nonetheless, the investigators tested what would happen if they injected water directly into the fuel system under freezing conditions. The crystals that formed, they found, could indeed clog the lines.

  Almost eight months after the crash, this was all they had for an explanation. Everyone was anxious to do something before a similar accident occurred. Just in case the explanation was right, the investigators figured out some midflight maneuvers to fix the problem. When an engine loses power, a pilot's instinct is to increase the thrust--to rev the engine. But if ice crystals have accumulated, increasing the fuel flow only throws more crystals into the fuel lines. So the investigators determined that pilots should do the opposite and idle the engine momentarily. This reduces fuel flow and permits time for heat exchangers in the piping to melt the ice--it takes only seconds--allowing the engines to recover. At least that was the investigators' best guess.

  So in September 2008, the Federal Aviation Administration in the United States issued a detailed advisory with new procedures pilots should follow to keep ice from accumulating on polar flights and also to recover flight control if icing nonetheless caused engine failure. Pilots across the world were somehow supposed to learn about these findings and smoothly incorporate them into their flight practices within thirty days. The remarkable thing about this episode--and the reason the story is worth telling--is that the pilots did so.

  How this happened--it involved a checklist, of course--is instructive. But first think about what happens in most lines of professional work when a major failure occurs. To begin with, we rarely investigate our failures. Not in medicine, not in teaching, not in the legal profession, not in the financial world, not in virtually any other kind of work where the mistakes do not turn up on cable news. A single type of error can affect thousands, but because it usually touches only one person at a time, we tend not to search as hard for explanations.

  Sometimes, though, failures are investigated. We learn better ways of doing things. And then what happens? Well, the findings might turn up in a course or a seminar, or they might make it into a professional journal or a textbook. In ideal circumstances, we issue some inch-thick set of guidelines or a declaration of standards. But getting the word out is far from assured, and incorporating the changes often takes years.

  One study in medicine, for example, examined the aftermath of nine different major treatment discoveries such as the finding that the pneumococcus vaccine protects not only children but also adults from respiratory infections, one of our most common killers. On average, the study reported, it took doctors seventeen years to adopt the new treatments for at least half of American patients.

  What experts like Dan Boorman have recognized is that the reason for the delay is not usually laziness or unwillingness. The reason is more often that the necessary knowledge has not been translated into a simple, usable, and systematic form. If the only thing people did in aviation was issue dense, pages-long bulletins for every new finding that might affect the safe operation of airplanes--well, it would be like subjecting pilots to the same deluge of almost 700,000 medical journal articles per year that clinicians must contend with. The information would be unmanageable.

  But instead, when the crash investigators issued their bulletin--as dense and detailed as anything we find in medicine--Boorman and his team buckled down to the work of distilling the information into its practical essence. They drafted a revision to the standard checklists pilots use for polar flights. They sharpened, trimmed, and puzzled over pause points--how are pilots to know, for instance, whether an engine is failing because of icing instead of something else? Then his group tested the checklist with pilots in the simulator and found problems and fixed them and tested again.

  It took about two weeks for the Boeing team to complete the testing and refinement, and then they had their checklist. They sent it to every owner of a Boeing 777 in the world. Some airlines used the checklist as it was, but many, if not most, went on to make their own adjustments. Just as schools or hospitals tend to do things slightly differently, so do airlines, and they are encouraged to modify the checklists to fit into their usual procedures. (This customization is why, when airlines merge, among the fiercest battles is the one between the pilots over whose checklists will be used.) Within about a month of the recommendations becoming available, pilots had the new checklist in their hands--or in their cockpit computers. And they used it.

  How do we know? Because on November 26, 2008, the disaster almost happened again. This time it was a Delta Air Lines flight from Shanghai to Atlanta with 247 people aboard. The Boeing 777 was at thirty-nine thousand feet over Great Falls, Montana, when it experienced "an uncommanded rollback" of the right No. 2 engine--the engine, in other words, failed. Investigation later showed that ice had blocked the fuel lines--the icing theory was correct--and Boeing instituted a mechanical change to keep it from happening again. But in the moment, the loss of one engine in this way, potentially two, over the mountains of Montana could have been catastrophic.

  The pilot and copilot knew what to do, though. They got out their checklist and followed the lessons it offered. Because they did, the engine recovered, and 247 people were saved. It went so smoothly, the passengers didn't even notice.

  This, it seemed to me, was something to hope for in surgery.

  7. THE TEST

  Back in Boston, I set my research team to work making our fledgling surgery checklist more usable. We tried to follow the lessons from aviation. We made it clearer. We made it speedier. We adopted mainly a DO-CONFIRM rather than a READ-DO format, to give people greater flexibility in performing their tasks while nonetheless having them stop at key points to confirm that critical steps have not been overlooked. The checklist emerged vastly improved.
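
  For readers who want to see the distinction concretely, here is a minimal sketch, in Python, of how the two formats differ in use. The little Checklist structure and the item names are invented for illustration; they come from neither Boeing's checklists nor our WHO draft.

```python
# Illustrative sketch only: a rough model of READ-DO vs. DO-CONFIRM checklists.
# The structure and item names are invented for this example.
from dataclasses import dataclass, field


@dataclass
class Checklist:
    title: str
    mode: str                       # "READ-DO" or "DO-CONFIRM"
    items: list = field(default_factory=list)

    def run(self, already_done=None):
        already_done = set(already_done or [])
        print(self.title)
        for item in self.items:
            if self.mode == "READ-DO":
                # Read each step aloud, then perform it immediately.
                print(f"  READ: {item}  ->  do it now")
            else:
                # The team works from memory; it pauses here only to confirm.
                status = "confirmed" if item in already_done else "MISSED -- do it now"
                print(f"  CONFIRM: {item}  ->  {status}")


# Hypothetical examples of each format.
cargo_door = Checklist(
    "Fwd cargo door not secure (READ-DO example)", "READ-DO",
    ["Set cabin altitude selector", "Descend to safe altitude", "Depressurize cabin"],
)
before_incision = Checklist(
    "Before skin incision (DO-CONFIRM example)", "DO-CONFIRM",
    ["Antibiotics given", "Team introductions done", "Anticipated blood loss stated"],
)

cargo_door.run()
before_incision.run(already_done={"Antibiotics given"})
```

  The only point of the sketch is the difference in flow: a READ-DO list drives the work itself, step by step, while a DO-CONFIRM list lets the team work from memory and stops it at a pause point to verify that nothing critical was skipped.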

  Next, we tested it in a simulator, otherwise known as the conference room on my hallway at the school of public health where I do research. We had an assistant lie on a table. She was our patient. We assigned different people to play the part of the surgeon, the surgical assistant, the nurses (one scrubbed-in and one circulating), and the anesthesiologist. But we hit problems just trying to get started.

  Who, for example, was supposed to bring things to a halt and kick off the checklist? We'd been vague about that, but it proved no small decision. Getting everyone's attention in an operation requires a degree of assertiveness--a level of control--that only the surgeon routinely has. Perhaps, I suggested, the surgeon should get things started. I got booed for this idea. In aviation, there is a reason the "pilot not flying" starts the checklist, someone pointed out. The "pilot flying" can be distracted by flight tasks and liable to skip a checklist. Moreover, dispersing the responsibility sends the message that everyone--not just the captain--is responsible for the overall well-being of the flight and should have the power to question the process. If a surgery checklist was to make a difference, my colleagues argued, it needed to do likewise--to spread responsibility and the power to question. So we had the circulating nurse call the start.

  Must nurses make written check marks? No, we decided, they didn't have to. This wasn't a record-keeping procedure. We were aiming for a team conversation to ensure that everyone had reviewed what was needed for the case to go as well as possible.

  Every line of the checklist needed tweaking. We timed each successive version by a clock on the wall. We wanted the checks at each of the three pause points--before anesthesia, before incision, and before leaving the OR--to take no more than about sixty seconds, and we weren't there yet. If we wanted acceptance in the high-pressure environment of operating rooms, the checklist had to be swift to use. We would have to cut some lines, we realized--the non-killer items.

  This proved the most difficult part of the exercise. An inherent tension exists between brevity and effectiveness. Cut too much and you won't have enough checks to improve care. Leave too much in and the list becomes too long to use. Furthermore, an item critical to one expert might not be critical to another. In the spring of 2007, we reconvened our WHO group of international experts in London to consider these questions. Not surprisingly, the most intense disagreements flared over what should stay in and what should come out.

  European and American studies had discovered, for example, that in long operations teams could substantially reduce patients' risks of developing deep venous thrombosis--blood clots in their legs that can travel to their lungs with fatal consequences--by injecting a low dose of a blood thinner, such as heparin, or slipping compression stockings onto their legs. But researchers in China and India disputed the necessity, reporting far lower rates of blood clots in their populations than in the West and almost no deaths. Moreover, for poor- and middle-income countries, the remedies--stockings or heparin--aren't cheap. And even a slight mistake by inexperienced practitioners administering the blood thinner could cause a dangerous overdose. The item was dropped.

  We also discussed operating room fires, a notorious problem. Surgical teams rely on high-voltage electrical equipment, cautery devices that occasionally arc while in use, and supplies of high-concentration oxygen--all sometimes in close proximity. As a result, most facilities in the world have experienced a surgical fire. These fires are terrifying. Pure oxygen can make almost anything instantly flammable--the surgical drapes over a patient, for instance, and even the airway tube inserted into the throat. But surgical fires are also entirely preventable. If teams ensure there are no oxygen leaks, keep oxygen settings at the lowest acceptable concentration, minimize the use of alcohol-containing antiseptics, and prevent oxygen from flowing onto the surgical field, fires will not occur. A little advance preparation can also avert harm to patients should a fire break out--in particular, verifying that everyone knows the location of the gas valves, alarms, and fire extinguishers. Such steps could easily be included on a checklist.

  But compared with the big global killers in surgery, such as infection, bleeding, and unsafe anesthesia, fire is exceedingly rare. Of the tens of millions of operations per year in the United States, it appears only about a hundred involve a surgical fire, and vanishingly few of those result in a fatality. By comparison, some 300,000 operations result in a surgical site infection, and more than eight thousand deaths are associated with these infections. We have done far better at preventing fires than infections. Since the checks required to entirely eliminate fires would make the list substantially longer, these fire checks were dropped as well.