In May of 2000, shortly before I stepped down as editor-in-chief of the New England Journal of Medicine, I wrote an editorial entitled, “Is Academic Medicine for Sale?” It was prompted by a clinical trial of an antidepressant called Serzone that was published in the same issue of the Journal.
The authors of that paper had so many financial ties to drug companies, including the maker of Serzone, that a full-disclosure statement would have been about as long as the article itself, so it could appear only on our Web site. The lead author, who was chairman of the department of psychiatry at Brown University (presumably a full-time job), was paid more than half a million dollars in drug-company consulting fees in just one year. Although that particular paper was the immediate reason for the editorial, I wouldn’t have bothered to write it if it weren’t for the fact that the situation, while extreme, was hardly unique.
Among the many letters I received in response, two were especially pointed. One asked rhetorically, “Is academic medicine for sale? These days, everything is for sale.” The second went further: “Is academic medicine for sale? No. The current owner is very happy with it.” The author didn’t feel he had to say who the current owner was.
The boundaries between academic medicine—medical schools, teaching hospitals, and their faculty—and the pharmaceutical industry have been dissolving since the 1980s, and the important differences between their missions are becoming blurred. Medical research, education, and clinical practice have suffered as a result.
Academic medical centers are charged with educating the next generation of doctors, conducting scientifically important research, and taking care of the sickest and neediest patients. That’s what justifies their tax-exempt status. In contrast, drug companies—like other investor-owned businesses—are charged with increasing the value of their shareholders’ stock. That is their fiduciary responsibility, and they would be remiss if they didn’t uphold it. All their other activities are means to that end. The companies are supposed to develop profitable drugs, not necessarily important or innovative ones, and paradoxically enough, the most profitable drugs are the least innovative. Nor do drug companies aim to educate doctors, except as a means to the primary end of selling drugs. Drug companies don’t have education budgets; they have marketing budgets from which their ostensibly educational activities are funded.
This profound difference in missions is often deliberately obscured—by drug companies because it’s good public relations to portray themselves as research and educational institutions, and by academics because it means they don’t have to face up to what’s really going on.
Industry and academia
No area of overlap between industry and academia is more important than clinical trials. Unlike basic medical research, which is funded mainly by the National Institutes of Health (NIH), most clinical trials are funded by the pharmaceutical industry. In fact, that is where most pharmaceutical research dollars go. That’s because the Food and Drug Administration (FDA) will not approve a drug for sale until it has been tested on human subjects. Pharmaceutical companies must show the FDA that a new drug is reasonably safe and effective, usually as compared with a placebo. That requires clinical trials, in which treatments are compared under rigorous conditions in a sample of the relevant population. The results of drug trials (there may be many) are submitted to the FDA, and if one or two are positive—that is, they show effectiveness without serious risk—the drug is usually approved, even if all the other trials are negative.
Since drug companies don’t have direct access to human subjects, they’ve traditionally contracted with academic researchers to conduct the trials on patients in teaching hospitals and clinics. That practice continues, but over the past couple of decades the terms and conditions have changed dramatically.
Until the mid-1980s, drug companies simply gave grants to medical centers for researchers to test their products, and then waited for the results and hoped their products looked good. Usually the research was investigator-initiated, that is, the question was something the academic researcher thought scientifically important. Sponsors had no part in designing or analyzing the studies, they did not claim to own the data, and they certainly did not write the papers or control publication. Grants were at arm’s length.
Thanks to the academy’s increasing dependence on industry, that distance is a thing of the past. The major drug companies are now hugely profitable, with net incomes consistently several times the median for Fortune 500 companies. In fact, they make more in profits than they spend on research and development (R&D), despite their rhetoric about high prices being necessary to cover their research costs. (They also spend twice as much on marketing and administration as they do on R&D.) The reasons for the astonishing profitability of these companies aren’t relevant here, but suffice it to say that as a result the industry has acquired enormous power and influence. In contrast, medical centers have fallen on difficult times (or so they believe), mainly because of shrinking reimbursements for their educational and clinical missions. To a remarkable extent, then, medical centers have become supplicants to the drug companies, deferring to them in ways that would have been unthinkable even twenty years ago.
Often, academic researchers are little more than hired hands who supply human subjects and collect data according to instructions from corporate paymasters. The sponsors keep the data, analyze it, write the papers, and decide whether and when and where to submit them for publication. In multi-center trials, researchers may not even be allowed to see all of the data, an obvious impediment to science and a perversion of standard practice.
While some new companies—called contract research organizations (CROs)—do clinical research for the drug manufacturers by organizing doctors in private practice to enroll their patients in clinical trials, the manufacturers typically prefer to work with academic medical centers. Doing so increases the chances of getting research published, and, more importantly, provides drug companies access to highly influential faculty physicians—referred to by the industry as “thought leaders” or “key opinion leaders.” These are the people who write textbooks and medical-journal papers, issue practice guidelines (treatment recommendations), sit on FDA and other governmental advisory panels, head professional societies, and speak at the innumerable meetings and dinners that take place every day to teach clinicians about prescription drugs.
In addition to grant support, academic researchers may now have a variety of other financial ties to the companies that sponsor their work. They serve as consultants to the same companies whose products they evaluate, join corporate advisory boards and speakers bureaus, enter into patent and royalty arrangements, agree to be the listed authors of articles ghostwritten by interested companies, promote drugs and devices at company-sponsored symposia, and allow themselves to be plied with expensive gifts and trips to luxurious settings. Many also have equity interest in sponsoring companies.
The institutional conflict-of-interest rules ostensibly designed to control these relationships vary widely from one medical center to the next, and they are generally permissive and loosely enforced. At Harvard Medical School, for example, few conflicts of interest are flatly prohibited; they are only limited in various ways. Like Hollywood, academic medical centers run on a star system, and schools don’t want to lose their stars, who are now accustomed to supplementing their incomes through deals with industry.
Schools, too, have deals with industry. Academic leaders, chairs, and even deans sit on boards of directors of drug companies. Many academic medical centers have set up special offices to offer companies quick soup-to-nuts service. Harvard’s Clinical Research Institute (HCRI), for example, originally advertised itself as led by people whose “experience gives HCRI an intimate understanding of industry’s needs, and knowledge of how best to meet them”—as though meeting industry’s needs is a legitimate purpose of an academic institution.
Much of the rationalization for the pervasive research connections between industry and academia rests on the Bayh-Dole Act of 1980, which has acquired the status of holy writ in academia. Bayh-Dole permits—but does not require, as many researchers claim—universities to patent discoveries that stem from government-funded research and then license them exclusively to companies in return for royalties. (Similar legislation applies to work done at the NIH itself.) In this way, academia and industry are partners, both benefiting from public support.
Until Bayh-Dole, all government-funded discoveries were in the public domain. The original purpose of Bayh-Dole was to speed technology transfer from the discovery stage to practical use. It was followed by changes in patent law that loosened the criteria for granting patents. As a consequence, publicly funded discoveries of no immediate practical use can now be patented and handed off to start-up companies for early development. The start-up companies are often founded by the researchers and their institutions, and they usually either license their promising products to larger companies or are bought by large companies outright.
The result of Bayh-Dole was a sudden, huge increase in the number of patents—if not in their quality. And the most prestigious academic centers now have technology-transfer offices and are ringed by start-up companies. Most technology-transfer offices at academic medical centers don’t make much money, but every now and then one strikes it rich. Columbia University, for example, received nearly $300 million in royalties from more than 30 biotech companies during the seventeen-year life of its patent on a method for synthesizing biological products. Patenting and licensing the fruits of academic research has the character of a lottery, and everyone wants to play.
A less-appreciated outcome of Bayh-Dole is that drug companies no longer have to do their own creative, early-stage research. They can, and increasingly do, rely on universities and start-up companies for that. In fact, the big drug companies now concentrate mainly on the late-stage development of drugs they’ve licensed from other sources, as well as on producing variations of top-selling drugs already on the market—called “me-too” drugs. There is very little innovative research in the modern pharmaceutical industry, despite its claims to the contrary.
Over the past two or three decades, then, academia and industry have become deeply intertwined. Moreover, these links, though quite recent, are now largely accepted as inherent in medical research. So what’s wrong with that? Isn’t this just the sort of collaboration that leads to the development of important new medical treatments?
Medical research
Increasingly, industry is setting the research agenda in academic centers, and that agenda has more to do with industry’s mission than with the mission of the academy. Researchers and their institutions are focusing too much on targeted, applied research, mainly drug development, and not enough on non-targeted, basic research into the causes, mechanisms, and prevention of disease.
Moreover, drug companies often contract with academic researchers to carry out studies for almost entirely commercial purposes. For example, they sponsor trials of drugs to supplant virtually identical ones that are going off patent. And academic institutions are increasingly focused on the Bayh-Dole lottery. A few years ago, the Dana-Farber Cancer Institute sent Harvard faculty an invitation to a workshop called “Forming Science-Based Companies.” It began:
So you want to start a company? Join the Provost, Harvard’s Office for Technology and Trademark Licensing (OTTL), leading venture capitalists, lawyers and entrepreneurs for a conference on the basics of forming a start-up based on university technology.
There’s a high scientific opportunity cost in serving the aims of the pharmaceutical industry. For example, new antibiotics for treating infections by resistant organisms are an urgent medical need, but are not economically attractive to industry because they are not likely to generate much return on investment.
Drug-company influence not only distorts the research agenda; there is overwhelming evidence that it biases the research itself. Industry-supported research is far more likely to be favorable to the sponsors’ products than is NIH-supported research. There are many ways to bias studies—both consciously and unconsciously—and they are by no means always obvious. I saw a good number of them during my two decades as an editor of the New England Journal of Medicine. Often, when we rejected studies because of their biases, they turned up in other journals essentially unchanged. And looking back, I now realize that despite our best efforts, we sometimes published biased studies without knowing it. One problem was our assumption that, as long as studies were subjected to rigorous peer review, it was sufficient to disclose authors’ commercial ties—essentially to tell readers caveat emptor, as in the Serzone study I mentioned earlier. I no longer believe that’s enough.
An important cause of bias is the suppression of negative results. But clinical trials are also biased through research protocols designed to yield favorable results for sponsors. There are many ways to do that. The sponsor’s drug may be compared with another drug administered at a dose so low that the sponsor’s drug looks more powerful. Or a drug that’s likely to be used by older people will be tested in young people, so that side effects are less likely to emerge. The standard practice of comparing a new drug with a placebo, when the relevant question is how it compares with an existing drug, is also misleading. Supporters of the status quo claim that attempts to regulate conflicts of interest will slow medical advances, but the truth is that conflicts of interest distort medical research, and advances occur in spite of them, not because of them.
To be clear, I’m not objecting to all research collaboration between academia and industry—only to terms and conditions that threaten the independence and impartiality essential to medical research. Research collaboration between academia and industry can be fruitful, but it doesn’t need to involve payments to researchers beyond grant support. And that support, as I have argued, should be at arm’s length.
Expert advice
Conflicts of interest affect more than research. They also directly shape the way medicine is practiced, through their influence on practice guidelines issued by professional and governmental bodies and through their effects on FDA decisions.
Consider three examples I’ve written about before. First, in a survey of 200 expert panels that issued practice guidelines, one third of the panel members acknowledged that they had some financial interest in the drugs they assessed. Second, in 2004, after the NIH National Cholesterol Education Program called for sharply lowering the acceptable levels of “bad” cholesterol, it was revealed that eight of nine members of the panel writing the recommendations had financial ties to the makers of cholesterol-lowering drugs. Third, of the 170 contributors to the most recent edition of the American Psychiatric Association’s Diagnostic and Statistical Manual of Mental Disorders (DSM-IV), 95 had financial ties to drug companies, including all of the contributors to the sections on mood disorders and schizophrenia.
Perhaps most important, many members of the eighteen standing committees of experts that advise the FDA on drug approvals also have financial ties to the industry. After the painkiller Vioxx was removed from the market in 2004 (it increased the risk of heart attacks), the FDA convened a panel consisting of two of these committees to consider whether painkillers of the same class as Vioxx should also be removed from the market. Following three days of public hearings, the combined panel decided that, although these drugs—called COX-2 inhibitors—did increase the risk of heart attacks, the benefits outweighed the risks. It therefore recommended that all three of the drugs, including the already-withdrawn Vioxx, be permitted on the market, perhaps with strong warnings on their labels.
A week after the panel’s decision, however, The New York Times revealed that ten of the 32 panel members had financial ties to the manufacturers, and that if their votes had been excluded, only one of the drugs would have been permitted to stay on the market. As a result of this embarrassing revelation, the FDA rejected the panel’s recommendation and left only that one drug, Celebrex, on the market, with a warning on the label.
Medical education
Conflicts of interest are equally troubling in medical education, where industry influence is perhaps greatest and least justified. The pharmaceutical industry devotes much, if not most, of its vast marketing budget to what it calls the “education” of doctors. The reason is obvious: doctors write the prescriptions, so they need to be won over.
Drug companies support educational programs even within our best medical schools and teaching hospitals, and are given virtually unfettered access to young doctors to ply them with gifts and meals and promote their wares. In most states doctors are required to take accredited education courses, called continuing medical education (CME), and drug companies contribute roughly half the support for this education, often indirectly through private investor-owned medical-education companies whose only clients are drug companies. CME is supposed to be free of drug-company influence, but, incredibly, these private educators have been accredited to provide CME by the Accreditation Council for Continuing Medical Education (ACCME)—a case of the fox not only guarding the chicken coop, but living inside it.
One of the most flagrant examples of the merging of education and marketing is Pri-Med, which is owned by M/C Communications, one of the largest of the medical-education companies. In partnership with Harvard Medical School, Pri-Med provides CME conferences throughout the country at virtually no cost to those who attend, courtesy of the huge income it receives from industry sponsors. The programs feature industry-prepared symposia during free meals, as well as academic talks by faculty during the rest of the day. The two types of talks are listed separately, but take place at the same meeting, where there is also a gigantic exhibit hall for industry sponsors. The Harvard name and logo figure prominently in Pri-Med’s advertising and at the conferences, in return for which Harvard Medical School receives direct income, as well as payments to participating faculty.
If drug companies and medical educators were really providing education, doctors and academic institutions would pay them for their services. When you take piano lessons, you pay the teacher, not the other way around. But in this case, industry pays the academic institutions and faculty, and even the doctors who take the courses. The companies are simply buying access to medical school faculty and to doctors in training and practice.
This is marketing masquerading as education. It is self-evidently absurd to look to companies for critical, unbiased education about products they sell. It’s like asking a brewery to teach you about alcoholism, or a Honda dealer for a recommendation about what car to buy. Doctors recognize this in other parts of their lives, but they’ve convinced themselves that drug companies are different. That industry-sponsored education is a masquerade is underscored by the fact that some of the biggest Madison Avenue ad agencies, hired by drug companies to promote their products, also own their own medical-education companies. It’s one-stop shopping for the industry.
But doctors do learn something from all the ostensible education they’re paid to receive. Doctors and their patients come to believe that for every ailment and discontent there is a drug, even when changes in lifestyle would be more effective. And they believe that the newest, most expensive brand-name drugs are superior to older drugs or generics, even though there is seldom any evidence to that effect because sponsors don’t usually compare their drugs with older drugs at equivalent doses. In addition, doctors are encouraged to prescribe drugs for uses not approved by the FDA (known as “off-label” prescriptions).
While I favor research collaboration between industry and academia under certain terms and conditions, I believe the pharmaceutical industry has no legitimate role in graduate or post-graduate medical education. That should be the responsibility of the profession. In fact, responsibility for its own education is an essential part of the definition of a learned profession.
No excuses
It’s easy to fault drug companies for much of what I’ve described, and they certainly deserve a great deal of blame. Most of the big drug companies have paid huge fines to settle charges of illegal activities. Last year Pfizer pleaded guilty and agreed to pay $2.3 billion to settle criminal and civil charges of marketing drugs for off-label uses—a settlement that included the largest criminal fine in history. The fines, while enormous, are still dwarfed by the profits generated by these activities, and are therefore not much of a deterrent. Still, apologists might argue that, despite its legal transgressions, the pharmaceutical industry is merely trying to do its primary job—furthering the interests of its investors—and sometimes it simply goes a little too far.
Doctors, medical schools, and professional organizations have no such excuse; the medical profession’s only fiduciary responsibility is to patients and the public.
What should be done about all of this? So many reforms would be necessary to restore integrity to medical research, education, and practice that they can’t all be summarized here. Many would involve congressional legislation and changes in the FDA, including its drug-approval process. But the medical profession also needs to wean itself from industry money almost entirely.
For some time now, I’ve been recommending these three essential reforms:
First, members of medical school faculties who conduct clinical trials should not accept any payments from drug companies except research support, and that support should have no strings attached. In particular, drug companies should have no control over the design, interpretation, and publication of research results. Medical schools and teaching hospitals should rigorously enforce this rule and should not themselves enter into deals with companies whose products are being studied by members of their faculty.
Second, doctors should not accept gifts from drug companies, even small ones, and they should pay for their own meetings and continuing education. Other professions pay their own way, and there is no reason for the medical profession to be different in this regard.
Finally, academic medical centers that patent discoveries should put them in the public domain or license them inexpensively and non-exclusively, as Stanford did with its patent on recombinant DNA technology based on the work of Stanley Cohen and Herbert Boyer. Bayh-Dole is now more a matter of seeking windfalls than of transferring technology. Some have argued that it actually impedes technology transfer by enabling the licensing of early discoveries, which encumbers downstream research. Though the legislation stipulates that drugs licensed from academic institutions be made “available on reasonable terms” to the public, that provision has been ignored by both industry and academia. I believe medical research was every bit as productive before Bayh-Dole as it is now, despite the lack of patents. I’m reminded of Jonas Salk’s response when asked whether he had patented the polio vaccine. He seemed amazed at the very notion. The vaccine, he explained, belonged to everybody. “Could you patent the sun?” he asked.
I’m aware that my proposals might seem radical. That is because we are now so drenched in market ideology that any resistance is considered quixotic. But academic medical centers are not supposed to be businesses. They now enjoy great public support, and they jeopardize that support by continuing along the current path.
And to those academic researchers who think the current path is just fine, I have this to say: no, it is not necessary to accept personal payments from drug companies to collaborate on research. There was plenty of innovative research before 1980, when academic researchers began to expect rewards from industry; indeed, there was at least as much as there is now. And no, you are not entitled to anything you want just because you’re very smart. Conflicts of interest in academic medicine have serious consequences, and it is time to stop making excuses for them.
Editors’ Note: This article is adapted from a talk delivered by Marcia Angell at Harvard University’s Edmond J. Safra Foundation Center for Ethics on December 10, 2009.