Who Needs Medical Ethics?

Commentary, February 2001

By Sally Satel and Christine Stolba

Instilling in doctors a strong commitment to do right by their patients has been a concern of the medical profession since antiquity. Indeed, the ancient Hippocratic Oath, with its pledge to “come for the benefit of the sick” and to refrain from divulging the confidences of patients or engaging in sexual relations with them, is still administered, in one form or another, to the graduates of most medical schools. The American Medical Association also has its own code of ethics, which dates back more than 150 years and includes, in its most recent version, thoughtful guidance on matters like fetal research and end-of-life care.

What is noteworthy about how such questions are handled today is the arrival on the scene of an entirely new sort of specialist: the medical ethicist. Professionals with this title, or the equivalent title of bioethicist or clinical ethicist, can now be found at the National Institutes of Health and the Public Health Service, on the health committees of Congress and the state legislatures, at biotechnology and managed-care companies, and of course at hundreds of clinics and hospitals.

What these designated advice-givers actually do varies from institution to institution. Where medical research is being conducted, they usually serve on review boards meant to ensure that human subjects give their informed consent and are not placed at unacceptable risk. At managed-care companies, they are often brought in to give their blessing (and thus, it is hoped, some protection against liability) to policies that might one day have to be defended in court.

More often, however, and more controversially, medical ethicists serve as “on-call” consultants at ordinary healthcare facilities, where they offer bedside guidance on issues ranging from the mental competence of patients to whether certain extraordinary treatments should be withheld or withdrawn. By playing this role, and doing so at ever more institutions, the professional practitioners of medical ethics have raised a host of questions, not only about their own expertise and authority but, ultimately, about the very nature of medical care.

Like so many other recent social and cultural movements, the effort to make medical ethics into a specialty of its own emerged from the campus upheavals of the 1960’s. In keeping with the activist spirit of the day, many professors of philosophy decided to leave behind the historical and technical questions that had long dominated their discipline. Instead, they turned their attention to what they called (often as a euphemism for political agitation) applied philosophy, a rubric that covered, among many other things, the life-and-death problems posed by modern medical care. The idea, according to Daniel Callahan, one of the pioneers of bioethics, was to give philosophy “some social bite, some ‘relevance.’”

Academics were not alone, however, in wanting to look more systematically at medical decisionmaking. Many physicians were interested as well, feeling that their work had become vastly more complicated, and more fraught with difficult moral issues, because of the advent of new technologies and treatments. At the same time, public concern over medical ethics was growing. Revelations in the early 1970’s about the U.S. government’s Tuskegee study, in which hundreds of Alabama sharecroppers with syphilis went untreated, fueled the perception that the profession warranted scrutiny. Questions about the limits of medical authority were raised, too, by the case of Karen Ann Quinlan, a young New Jersey woman kept alive on a respirator after lapsing into a coma in 1975. In a much-publicized 1976 decision, the state’s Supreme Court ruled that her father, despite her physician’s opposition, could have the respirator removed.

As a result of these various developments, the 1960’s and 70’s witnessed a remarkable burst of institution-building among the newly self-proclaimed bioethicists and their medical collaborators. Scores of symposia, meetings, and conferences were held on issues like death and dying, organ transplantation, and new fertility techniques. More importantly, the first institutes devoted exclusively to the study of medical ethics were created, with funding from donors like the Rockefeller Foundation and the National Endowment for the Humanities. They were led by Daniel Callahan’s Hastings Center for Bioethics, the Kennedy Institute of Ethics at Georgetown University, and the Society for Health and Human Values.

Nor was professional recognition for the new field long in coming. In 1992, the Joint Commission on Accreditation of Healthcare Organizations, the major accrediting agency for the nation’s healthcare institutions, mandated that all hospitals establish formal procedures for dealing with ethical issues arising from patient care. The decision seemed, in effect, to promise full employment for medical ethicists.

Today, the bioethics “industry,” as insiders do not blush to call it, is booming. Some 50 universities in the U.S. now have academic centers focusing on medical ethics, and many more provide courses on the subject as part of their offerings in the humanities. The field has a number of publications of its own, including the Journal of Clinical Ethics, the Cambridge Quarterly of Healthcare Ethics, and the Hastings Center Report, and the largest of its professional organizations, the American Society for Bioethics and Humanities, boasts some 1,600 members.

If the institutional stature of medical ethicists has increased dramatically in recent years, so too has their self-assurance. During the field’s infancy, practitioners often exhibited a notable degree of modesty about their ambitions. As Daniel Callahan confessed in 1973, he “resisted with utter panic the idea of participating with the physicians in their actual decisions,” much preferring “the safety of the profound questions I pushed on them.”

Today such modesty is far less common. Albert Jonsen, who teaches bioethics at the medical school of the University of Washington in Seattle, likes to think of himself and those he trains not just as consultants, ready to provide advice when it is solicited, but as “doctor-watchers.” As he describes his role at the hospital,

I follow the little party of doctors, nurses, and medical students to the bedside of very sick people. I read patients’ charts, talk about patients’ ills, participate in discussions about patients’ fates. Although I eschew the pretensions of a white coat and beeper, I admit to some gratification at being “inside.” More than that, I believe that I have some right to be there and that my being there does some good to doctors and patients alike.

What this “good” amounts to, according to medical ethicists, is encouraging doctors to give full consideration to certain key principles in resolving clinical dilemmas. Among these principles are the traditional, if vague, obligations to act for their patients’ benefit and to avoid harming them. In a more modern vein, doctors are urged to respect the “autonomy” of those whom they care for, which typically means obtaining their informed consent for any course of treatment.

A number of bioethicists have taken their mandate a step further. Harking back to the activist origins of the field, they argue that the immediate clinical setting must be seen as just one part of the wider, and deeply unjust, American healthcare system, in which medical resources are unequally distributed and the special needs of minorities and women are ignored. The members of the International Network on Feminist Approaches to Bioethics, for example, aim to “develop a more inclusive theory of bioethics encompassing the standpoints and experiences of women and other marginalized social groups,” believing as they do that many of the “presuppositions embedded in the dominant bioethical discourse … privilege those already empowered.”

Are any of these principles, from the modest to the politically ambitious, a worthy supplement to the demands of the Hippocratic Oath? For the most part, unfortunately, they are not, and for no more complicated reason than that they can lend themselves to outcomes that are far from self-evidently “ethical.”

Feminist bioethics is especially problematic in this regard. Consider, for instance, one of the cases discussed by Mary Briody Mahowald of the University of Chicago in her recent book, Genes, Women, Equality.* According to Mahowald, a pregnant woman named Julia Smith Andre was told that, because of her own metabolic disorder, she would have to follow a restrictive diet to keep her child from having serious birth defects. Andre refused to comply, and the child was born profoundly retarded.

Though it is unclear whether Andre could have been legally compelled to follow the doctor’s prescription, the verdict on her behavior, as an ethical matter, would seem straightforward: she violated her duty to her unborn child. But not so fast, Mahowald insists. Feminist “standpoint” theory, she informs us, grants “privileged status to [Andre’s] decision regarding diet,” since Andre, at least in Mahowald’s ideologically blinkered view of the situation, is “the person most affected.”

No less potentially dangerous is the relentless emphasis of bioethicists on the idea of personal “autonomy,” that great shibboleth of modern liberal theory, even in cases when patients are plainly incapable of deciding matters for themselves. In 1996, a schizophrenic man named Thomas W. Passmore imagined that he saw “666,” the biblical symbol of the Antichrist, on his right hand and, terrified, used a circular saw to cut off the offending hand. Surgeons at the hospital wanted to sew the hand back on, but Passmore, still in the grip of his psychosis, refused.

Was this delusional man quickly medicated with antipsychotic drugs and then rushed into the operating room? Alas, no. A group consisting of lawyers, psychiatrists, and a judge, acting in accordance with the dictates of medical ethics and in the name of “autonomy,” decided to abide by Passmore’s wishes: the demonic appendage was not reattached.

The practical issues that confront workaday clinical ethicists are not usually so extreme, of course. But even when it comes to the ordinary business of their profession, it is not at all clear what sort of specialized abilities or knowledge these trained “experts” bring to the hospital bedside.

In a study published in the Journal of the American Medical Association, Ellen Fox and Carol Stocking of the University of Chicago asked more than 100 ethics consultants to review seven hypothetical vignettes involving a patient similar to Karen Ann Quinlan: that is, someone who showed no awareness of the outside world, was considered incapable of experiencing pain, and had virtually no chance of regaining consciousness. Fox and Stocking chose this clinical situation because it is the one most often confronted by medical ethicists. Moreover, they were careful to exclude messy, confounding details, thereby making the cases “more straightforward than [those] an ethics consultant typically encounters.” Their finding: an almost complete lack of consensus. Indeed, for six of the seven cases, there was not even a majority response.

Explaining this astonishing result, which might be compared to discovering that 100 doctors had widely divergent opinions on how to treat a case of appendicitis, is hardly difficult. The fact is that there are few meaningful standards for bedside ethicists. Some are Ph.D.-level academics, others are lawyers, sociologists, or social workers, and still others are physicians or nurses. Nor is any particular course of study required to turn them into members of the ethicists’ guild. Their training runs the gamut from years of specialized doctoral work to the completion of an “intensive” ten-day course, like the one offered annually at the Kennedy Institute of Ethics at Georgetown.

But the absence of professional standards is not the most serious problem afflicting the field; rather, it is the lack of any coherent idea of what even a properly educated medical ethicist might add to a difficult clinical situation. In 1998, the American Society for Bioethics and Humanities attempted to codify the profession’s “core knowledge areas” and “core competencies.” A clinical ethicist, the group suggested, should not only be familiar with such subjects as moral reasoning, health law, and the organization of the healthcare system, but should be able, among other things, to “engage in creative problem-solving,” “listen well,” “communicate interest and respect,” and “distinguish ethical dimensions of [a] case from other overlapping dimensions.”

What is remarkable about this otherwise commonplace list is that such “core” responsibilities should be considered the special domain of medical ethicists. Are they really outside the purview of medical doctors?

Here, of course, is the rub. For despite the help that some bioethicists have occasionally provided in making tough medical decisions, there is no denying that their expanding role in American healthcare directly impinges on the traditional duties of physicians.

In the Journal of the American Medical Association, John La Puma, a physician, described a number of representative ethics consultations that took place in his hospital in the course of a year. One concerned the question of whether an alcoholic should receive a liver transplant; another involved a husband who, hoping for a “miracle,” wanted aggressive measures taken to revive his irreversibly brain-dead wife.

Both cases were resolved satisfactorily, but one finishes La Puma’s account wondering why it was necessary for an ethicist to be involved in either one. After all, determining whether a patient is a suitable candidate for a liver transplant is not an exotic clinical decision. The physician must determine whether the potential recipient is likely to stop drinking and stay sober, a process that entails learning something about the patient’s character, drinking history, and social habits. Similarly, helping a husband understand his wife’s dim prognosis is the sort of thing that physicians have always been expected to do, and have wanted to do, in fulfillment of the humane intentions that led them to choose medicine in the first place.

There are some doctors, to be sure, who welcome such intervention. A 1992 study found that physicians who requested ethics consultations found them “helpful” or “very helpful” in 86 percent of the cases. Others, no doubt, like the idea of sharing the responsibility (especially, if necessary, the legal responsibility) for the sort of emotionally trying decisions about which a patient’s family might eventually have second thoughts.

But most doctors, it seems, harbor deep, and justifiable, reservations about having an “ethical” intermediary between themselves and their patients. Not only is it intrusive and time-consuming, but, as one physician told the authors of an article in the Journal of Clinical Ethics, it suggests “that the doctor is not sure,” that “he’s looking for affirmation elsewhere” and “can’t make up his mind.” Moreover, as one of his colleagues observed, by turning ethical questions into a specialty, medical institutions are practically inviting doctors “to shunt the responsibility away to a consultant.”

The greatest risk posed by today’s growing cadre of medical ethicists is not that their advice will directly harm patients. Rather, it is that their presence in clinics and hospitals will tempt physicians to restrict their own job description, reserving to themselves the technical management of illness and injury while leaving the human side of the equation to the appointed “experts.”

These twin aspects of doctoring are not so easily separated, however, as the Hippocratic Oath plainly recognizes. The knowledge at a physician’s disposal can be used for good or ill, depending on his professional character. This has always been true, but its importance cannot be emphasized enough at a moment when medical technologies have vastly increased the options available for treating disease and prolonging life. What is urgently needed at the bedside of patients is not self-styled professional ethicists, but ethical doctors. Ghettoizing the moral side of medicine is not the way to produce them.

* Oxford University Press, 314 pp., $39.95.