Complications
Many surgeons elsewhere use Shouldice’s distinctive repair method but obtain ordinary rates of recurrence. It’s not the technique alone that makes Shouldice great. The doctors at Shouldice deliver hernia repairs the way Intel makes chips: they like to call themselves a “focused factory.” Even the hospital building is specially designed for hernia patients. Their rooms have no phones or televisions, and their meals are served in a downstairs dining hall; as a result, the patients have no choice but to get up and walk around, thereby preventing problems associated with inactivity, such as pneumonia or leg clots.
After Sang left the patient with a nurse, he found the next patient and walked him straight back into the same operating room. Hardly three minutes had passed, but the room was already clean. Fresh sheets and new instruments were already laid out. And so the next case began. I asked Byrnes Shouldice, a son of the clinic’s founder and a hernia surgeon himself, whether he ever got bored doing hernias all day long. “No,” he said in a Spock-like voice. “Perfection is the excitement.”
Paradoxically, this kind of superspecialization raises the question of whether the best medical care requires fully trained doctors. None of the three surgeons I watched operate at the Shouldice Hospital would even have been in a position to conduct their own procedures in a typical American hospital, for none had completed general surgery training. Sang was a former family physician; Byrnes Shouldice had come straight from medical school; and the surgeon-in-chief was an obstetrician. Yet after apprenticing for a year or so they were the best hernia surgeons in the world. If you’re going to do nothing but fix hernias or perform colonoscopies, do you really need the complete specialists’ training (four years of medical school, five or more years of residency) in order to excel? Depending on the area of specialization, do you—and this is the question posed by the Swedish EKG study—even have to be human?
Although the medical establishment has begun to recognize that automation like the Shouldice’s may be able to produce better results in medical treatment, many doctors are not fully convinced. And they have been particularly reluctant to apply the same insight to the area of medical diagnosis. Most physicians believe that diagnosis can’t be reduced to a set of generalizations—to a “cookbook,” as some say. Instead, they argue, it must take account of the idiosyncrasies of individual patients.
This only stands to reason, doesn’t it? When I am the surgical consultant in the emergency department, I’m often asked to assess whether a patient with abdominal pain has appendicitis. I listen closely to his story and consider a multitude of factors: how his abdomen feels to me, the pain’s quality and location, his temperature, his appetite, the laboratory results. But I don’t plug it all into a formula and calculate the result. I use my clinical judgment—my intuition—to decide whether he should undergo surgery, be kept in the hospital for observation, or be sent home. We’ve all heard about individuals who defy the statistics—the hardened criminal who goes straight, the terminal cancer patient who miraculously recovers. In psychology, there’s something called the broken-leg problem. A statistical formula may be highly successful in predicting whether or not a person will go to a movie in the next week. But someone who knows that this person is laid up with a broken leg will beat the formula. No formula can take into account the infinite range of such exceptional events. That’s why doctors are convinced that they’d better stick with their well-honed instincts when they’re making a diagnosis.
One weekend on duty, I saw a thirty-nine-year-old woman with pain in the right-lower abdomen who did not fit the pattern for appendicitis. She said that she was fairly comfortable and she had no fever or nausea. Indeed, she was hungry, and she did not jump when I pressed on her abdomen. Her test results were largely equivocal. But I still recommended appendectomy to the attending surgeon. Her white blood cell count was high, suggesting infection, and, moreover, she just looked sick to me. Sick patients can have a certain unmistakable appearance you come to recognize after a while in residency. You may not know exactly what is going on, but you’re sure it’s something worrisome. The attending physician accepted my diagnosis, operated, and found appendicitis.
Not long after, I had a sixty-five-year-old patient with almost precisely the same story. The lab findings were the same; I also got an abdominal scan, but it was inconclusive. Here, too, the patient didn’t fit the pattern for appendicitis; here, too, he just looked to me as if he had it. In surgery, however, the appendix turned out to be normal. He had diverticulitis, a colon infection that usually doesn’t require an operation.
Is the second case more typical than the first? How often does my intuition lead me astray? The radical implication of the Swedish study is that the individualized, intuitive approach that lies at the center of modern medicine is flawed—it causes more mistakes than it prevents. There’s ample support for this conclusion from studies outside medicine. Over the past four decades, cognitive psychologists have shown repeatedly that a blind algorithmic approach usually trumps human judgment in making predictions and diagnoses. The psychologist Paul Meehl, in his classic 1954 treatise, Clinical Versus Statistical Prediction, described a study of Illinois parolees that compared prison psychiatrists’ estimates of whether a convict would violate parole with estimates derived from a rudimentary formula that weighed such factors as age, number of previous offenses, and type of crime. Despite the formula’s crudeness, it predicted the occurrence of parole violations far more accurately than the psychiatrists did. In recent articles, Meehl and the social scientists David Faust and Robyn Dawes have reviewed more than a hundred studies comparing computers or statistical formulas with human judgment in predicting everything from the likelihood that a company will go bankrupt to the life expectancy of liver-disease patients. In virtually all cases, statistical thinking equaled or surpassed human judgment. You might think that a human being and a computer working together would make the best decisions. But, as the researchers point out, this claim makes little sense. If the opinions agree, no matter. If they disagree, the studies show that you’re better off sticking with the computer’s judgment.
What accounts for the superiority of a well-developed computer algorithm? First, Dawes notes, human beings are inconsistent: we are easily influenced by suggestion, the order in which we see things, recent experience, distractions, and the way information is framed. Second, human beings are not good at considering multiple factors. We tend to give some variables too much weight and wrongly ignore others. A good computer program consistently and automatically gives each factor its appropriate weight. After all, Meehl asks, when we go to the store, do we let the clerk eyeball our groceries and say, “Well, it looks like seventeen dollars’ worth to me”? With lots of training, the clerk might get very good at guessing. But we recognize that a computer that simply adds up the prices will be more consistent and more accurate. In the Swedish study, as it turned out, Ohlin rarely made obvious mistakes. But many EKGs are in the gray zone, with some features suggesting a healthy heart and others suggesting a heart attack. Doctors have difficulty judging reliably which way the mass of information tips, and they are easily influenced by extraneous factors, such as what the last EKG they came across looked like.
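The kind of actuarial formula Meehl and Dawes describe can be sketched in a few lines of code. This is a toy illustration only—the factors, weights, and threshold below are invented for the purpose, not drawn from the Illinois parole study or any real clinical tool—but it shows the property the researchers emphasize: a fixed weighted sum applies the same weights to every case, unaffected by framing, fatigue, or what the last case looked like.

```python
# Toy sketch of a Meehl-style actuarial formula: a fixed weighted
# sum of a few factors. All factors, weights, and the threshold are
# invented for illustration, not taken from any real study.

def risk_score(age, prior_offenses, crime_severity):
    """Return a fixed weighted sum of the inputs. Identical inputs
    always yield an identical score, unlike a human judge."""
    return (-0.02 * age) + (0.5 * prior_offenses) + (0.3 * crime_severity)

def predict_violation(age, prior_offenses, crime_severity, threshold=1.0):
    """Predict a parole violation when the score crosses a threshold."""
    return risk_score(age, prior_offenses, crime_severity) > threshold

# The formula weighs every case the same way, no matter what it
# "saw" last or how the information happens to be framed.
print(predict_violation(age=22, prior_offenses=4, crime_severity=2))  # True
print(predict_violation(age=55, prior_offenses=0, crime_severity=1))  # False
```

The point is not that three weights capture a convict or a patient; it is that even this crude consistency beats expert judgment in study after study, because the formula never drifts.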
It is probably inevitable that physicians will have to let computers take over at least some diagnostic decisions. One network, PAPNET, has already gained mainstream use in the screening of digitized Pap smears—microscopic scrapings taken from a woman’s cervix—for cancer or precancerous abnormalities, a job usually done by a pathologist. Researchers have completed more than a thousand studies on the use of neural networks in nearly every field of medicine. Networks have been developed to diagnose appendicitis, dementia, psychiatric emergencies, and sexually transmitted diseases. Others can predict the success of cancer treatment, organ transplantation, and heart valve surgery. Systems have been designed to read chest X rays, mammograms, and nuclear-medicine heart scans.
In the treatment of disease, parts of the medical world have already begun to extend the lesson of the Shouldice Hospital concerning the advantages of specialized, automated care. Regina Herzlinger, a professor at the Harvard Business School, who introduced the term “health-care focused factory” in her book Market-Driven Health Care, points to other examples, including the Texas Heart Institute for cardiac surgery and Duke University’s bone-marrow transplant center. Breast cancer patients seem to do best in specialized cancer treatment centers, where they have a cancer surgeon, an oncologist, a radiation therapist, a plastic surgeon, a social worker, a nutritionist, and others who see breast cancer day in and day out. And almost any hospital one goes to now has protocols and algorithms for treating at least a few common conditions, such as asthma or sudden stroke. The new artificial neural networks merely extend these lessons to the realm of diagnosis.
Still, resistance to this vision of mechanized medicine will remain. Part of it may well be short-sightedness: doctors can be stubborn about changing the way we do things. Part of it, however, stems from legitimate concern that, for all the technical virtuosity gained, something vital is lost in medicine by machine. Modern care already lacks the human touch, and its technocratic ethos has alienated many of the people it seeks to serve. Patients feel like a number too often as it is.
Yet compassion and technology aren’t necessarily incompatible; they can be mutually reinforcing. Which is to say that the machine, oddly enough, may be medicine’s best friend. On the simplest level, nothing comes between patient and doctor like a mistake. And while errors will always dog us—even machines are not perfect—trust can only increase when mistakes are reduced. Moreover, as “systems” take on more and more of the technical work of medicine, individual physicians may be in a position to embrace the dimensions of care that mattered long before technology came—like talking to their patients. Medical care is about our life and death, and we’ve always needed doctors to help us understand what is happening and why, and what is possible and what is not. In the increasingly tangled web of experts and expert systems, a doctor has an even greater obligation to serve as a knowledgeable guide and confidant. Maybe machines can decide, but we still need doctors to heal.
When Doctors Make Mistakes
To much of the public—and certainly to lawyers and the media—medical error is fundamentally a problem of bad doctors. The way that things go wrong in medicine is normally unseen and, consequently, often misunderstood. Mistakes do happen. We tend to think of them as aberrant. They are, however, anything but.
At 2 A.M. on a crisp Friday in winter a few years ago, I was in sterile gloves and gown, pulling a teenage knifing victim’s abdomen open, when my pager sounded. “Code Trauma, three minutes,” the operating room nurse said, reading aloud from my pager display. This meant that an ambulance would be bringing another trauma patient to the hospital momentarily, and, as the surgical resident on duty for emergencies, I would have to be present for the patient’s arrival. I stepped back from the table and took off my gown. Two other surgeons were working on the knifing victim: Michael Ball, the attending (the staff surgeon in charge of the case), and David Hernandez, the chief resident (a general surgeon in his final year of training). Ordinarily, these two would have come to supervise and help with the trauma, but they were stuck here. Ball, a dry, cerebral forty-two-year-old, looked over at me as I headed for the door. “If you run into any trouble, you call, and one of us will peel away,” he said.
I did run into trouble. In telling this story, I have had to change some details about what happened (including the names of those involved). Nonetheless, I have tried to stay as close to the actual events as I could while protecting the patient, myself, and the rest of the staff.
The emergency room was one floor up, and, taking the stairs two at a time, I arrived just as the emergency medical technicians wheeled in a woman who appeared to be in her thirties and to weigh more than two hundred pounds. She lay motionless on a hard orange plastic spinal board—eyes closed, skin pale, blood running out of her nose. A nurse directed the crew into Trauma Bay 1, an examination room outfitted like an OR, with green tiles on the wall, monitoring devices, and space for portable X-ray equipment. We lifted her onto the bed and then went to work. One nurse began cutting off the woman’s clothes. Another took vital signs. A third inserted a large-bore intravenous line into her right arm. A surgical intern put a Foley catheter into her bladder. The emergency-medicine attending was Samuel Johns, a gaunt, Ichabod Crane–like man in his fifties. He was standing to one side with his arms crossed, observing, which was a sign that I could go ahead and take charge.
In an academic hospital, residents provide most of the “moment to moment” doctoring. Our duties depend on our level of training, but we’re never entirely on our own: there’s always an attending, who oversees our decisions. That night, since Johns was the attending and was responsible for the patient’s immediate management, I took my lead from him. At the same time, he wasn’t a surgeon, and so he relied on me for surgical expertise.
“What’s the story?” I asked.
An EMT rattled off the details: “Unidentified white female unrestrained driver in high-speed rollover. Ejected from the car. Found unresponsive to pain. Pulse a hundred, BP a hundred over sixty, breathing at thirty on her own . . .”
As he spoke, I began examining her. The first step in caring for a trauma patient is always the same. It doesn’t matter if a person has been shot eleven times or crushed by a truck or burned in a kitchen fire. The first thing you do is make sure that the patient can breathe without difficulty. This woman’s breaths were shallow and rapid. An oximeter, by means of a sensor placed on her finger, measured the oxygen saturation of her blood. The “O2 sat” is normally more than 95 percent for a patient breathing room air. The woman was wearing a face mask with oxygen turned up full blast, and her sat was only 90 percent.
“She’s not oxygenating well,” I announced in the flattened-out, wake-me-up-when-something-interesting-happens tone that all surgeons have acquired by about three months into residency. With my fingers, I verified that there wasn’t any object in her mouth that would obstruct her airway; with a stethoscope, I confirmed that neither lung had collapsed. I got hold of a bag mask, pressed its clear facepiece over her nose and mouth, and squeezed the bellows, a kind of balloon with a one-way valve, shooting a liter of air into her with each compression. After a minute or so, her oxygen came up to a comfortable 98 percent. She obviously needed our help with breathing. “Let’s tube her,” I said. That meant putting a tube down through her vocal cords and into her trachea, which would insure a clear airway and allow for mechanical ventilation.
Johns, the attending, wanted to do the intubation. He picked up a Mac 3 laryngoscope, a standard but fairly primitive-looking L-shaped metal instrument for prying open the mouth and throat, and slipped the shoehornlike blade deep into her mouth and down to her larynx. Then he yanked the handle up toward the ceiling to pull her tongue out of the way, open her mouth and throat, and reveal the vocal cords, which sit like fleshy tent flaps at the entrance to the trachea. The patient didn’t wince or gag: she was still out cold.
“Suction!” he called. “I can’t see a thing.”
He sucked out about a cup of blood and clot. Then he picked up the endotracheal tube—a clear rubber pipe about the diameter of an index finger and three times as long—and tried to guide it between her cords. After a minute, her sat started to fall.
“You’re down to seventy percent,” a nurse announced.
Johns kept struggling with the tube, trying to push it in, but it banged vainly against the cords. The patient’s lips began to turn blue.
“Sixty percent,” the nurse said.
Johns pulled everything out of the patient’s mouth and fitted the bag mask back on. The oximeter’s luminescent-green readout hovered at 60 for a moment and then rose steadily, to 97 percent. After a few minutes, he took the mask off and again tried to get the tube in. There was more blood, and there may have been some swelling, too: all the poking down the throat was probably not helping. The sat fell to 60 percent. He pulled out and “bagged” her until she returned to 95 percent.
When you’re having trouble getting the tube in, the next step is to get specialized expertise. “Let’s call anesthesia,” I said, and Johns agreed. In the meantime, I continued to follow the standard trauma protocol: completing the examination and ordering fluids, lab tests, and X rays. Maybe five minutes passed as I worked.
The patient’s sats drifted down to 92 percent—not a dramatic change but definitely not normal for a patient who is being manually ventilated. I checked to see if the sensor had slipped off her finger. It hadn’t. “Is the oxygen up full blast?” I asked a nurse.
“It’s up all the way,” she said.
I listened again to the patient’s lungs—no collapse. “We’ve got to get her tubed,” Johns said. He took off the oxygen mask and tried again.
Somewhere in my mind, I must have been aware of the possibility that her airway was shutting down because of vocal cord swelling or blood. If it was, and we were unable to get a tube in, then the only chance she’d have to survive would be an emergency tracheotomy: cutting a hole in her neck and inserting a breathing tube into her trachea. Another attempt to intubate her might even trigger a spasm of the cords and a sudden closure of the airway—which is exactly what did happen.
If I had actually thought this far along, I would have recognized how ill-prepared I was to do an emergency “trache.” As the one surgeon in the room, it’s true, I had the most experience doing tracheotomies, but that wasn’t saying much. I had been the assistant surgeon in only about half a dozen, and all but one of them had been non-emergency cases, employing techniques that were not designed for speed. The exception was a practice emergency trache I had done on a goat. I should have immediately called Dr. Ball for backup. I should have got the trache equipment out—lighting, suction, sterile instruments—just in case. Instead of hurrying the effort to get the patient intubated because of a mild drop in saturation, I should have asked Johns to wait until I had help nearby. I might even have recognized that she was already losing her airway. Then I could have grabbed a knife and done a tracheotomy while things were still relatively stable and I had time to proceed slowly. But for whatever reasons—hubris, inattention, wishful thinking, hesitation, or the uncertainty of the moment—I let the opportunity pass.