Accountability in Research, Vol. 8, pp. 197–218 © 2001 Taylor & Francis

Adverse Events Reporting—The Tip of an Iceberg*

Adil E. Shamoo
University of Maryland, School of Medicine, 108 N. Greene Street, Baltimore, Maryland 21201

NIH data indicate that annually seven million human subjects are enrolled in research sponsored by NIH alone. In addition, there are sixteen federal agencies and numerous departments outside NIH conducting experiments with human subjects. Moreover, the pharmaceutical industry spends $26 billion on research (compared to $16 billion for NIH); thus, the total number of human subjects enrolled in research in both the public and private sectors can be estimated as high as nineteen million. I present data on the potential magnitude of adverse events in the United States among human subjects enrolled in research that appear to be unreported and unaccounted for. We obtained data from the Office for Human Research Protections (OHRP) through the Freedom of Information Act for the years 1990 to August 2000 regarding all Institutional Incident Reports (IRPTs) and a list of Compliance Oversight Branch Investigations (COBIs) involving Multiple Project Assurances (MPAs). In the ten years of reporting, covering nearly seventy million human subjects, there were only 878 IRPTs and 41 investigations. Of the incident reports to OHRP, 44% involved adverse events. Of the projects investigated for Multiple Project Assurance violations (41 such investigations), 51% were suspended or terminated. The number of deaths reported to OHRP in ten years for the seventy million human subjects is merely eight. The anticipated number of deaths among a general population of seventy million (assuming subjects' duration in trials is one month) is 51,000. The number of suicides and attempted suicides alone among the seventy million expected research subjects can be anticipated to be about 5,000.
Therefore, the number of expected deaths should have been between 5,000 and 51,000. These numbers and percentages represent minimal numbers, since they are not the result of random audits or investigations but of self-reporting or exogenous complaints. Even though these are conservative estimates, they represent a significant problem. The purpose of this paper is to present the potential boundaries and magnitude of the problem in the current use of human subjects in research. This is a call for responsible institutions to undertake a thorough evaluation of the problem in order to obtain accurate information. For the immediate future, strong actions need to be taken to increase the protection of human subjects enrolled in research. This precautionary policy is prudent in light of recent revelations and the data from this investigation.

*Presented in part at the National Conference on "The Business of Human Experiments: ethical, legal, and regulatory issues," November 3–5, 2000, Baltimore, Maryland. Tel.: (410) 706-3327. Fax: (410) 706-3189. E-mail: ashamoo@umaryland.edu.

INTRODUCTION

The purpose of this paper is to present the potential magnitude of under-reporting of adverse events in the population of human subjects enrolled in research. Annually, millions of Americans participate as subjects in medical research. While the exact numbers are difficult to determine, as will be discussed later in the paper, a conservative estimate puts the number near nineteen million. An understanding of the magnitude and types of problems with the current system will assist us in formulating realistic protocol designs as well as improved drug development. More importantly, the availability of such information enhances the protection of human subjects.

This paper explores the reporting of adverse events that occur in research using human subjects. Prior to the 1970s, there were several reports regarding questionable practices in the use of human subjects in research in the United States, among them: Beecher's revelations of twenty-two ethically questionable experiments; radiation experiments on unsuspecting patients; and the Willowbrook hepatitis experiments (see ACHRE's 1995 report for a history of the use of human subjects). These were followed by the revelation of the Tuskegee syphilis study of 399 African-Americans from 1932–1972. In the aftermath of congressional hearings and the report of a Tuskegee ad hoc committee, Congress enacted the 1974 National Research Act. In the early 1990s, several more detailed reports of improper use of human subjects in radiation experiments were published. This led to the creation of the Advisory Committee on Human Radiation Experiments (ACHRE). In 1995, ACHRE issued its report regarding the radiation experiments. ACHRE's report also reviewed the many historical incidents in the United States of disconcerting events involving human subjects. In 1993, we showed that there were serious lapses in the protection of psychiatric patients (Shamoo and Irving, 1993; Shamoo and Keay, 1996; Shamoo et al., 1997).

The National Research Act of 1974 (PL 93-348) was the first federal act to address the issue of protection of human subjects in research. This act established, for the first time, a national requirement for Institutional Review Boards (IRBs) and charged the former Department of Health, Education, and Welfare (DHEW) with the protection of human subjects.
The Act also led to the creation of what has become known as the "National Commission," and of the Office for Protection from Research Risks (OPRR) within the National Institutes of Health (NIH). Through these steps, Congress required the "Secretary" of DHEW to establish a "program" (i.e., OPRR) for "clarification and guidance with respect to ethical issues raised in connection with biomedical or behavioral research involving human subjects . . . ". The requirement for reporting "unanticipated problems involving risks to subjects or others" first appeared in 1981 (45 CFR 46) without any elaboration. However, since the 1991 version of 45 CFR 46, the reporting requirement has been governed by section 46.113, which states:

An IRB shall have authority to suspend or terminate approval of research that is not being conducted in accordance with the IRB's requirements or that has been associated with unexpected serious harm to subjects. Any suspension or termination of approval shall include a statement of the reasons for the IRB's action and shall be reported promptly to the investigator, appropriate institutional officials, and the Department official or Agency head.

The reporting requirements are also governed by section 46.103(b)(5), which states:

. . . procedures for ensuring prompt reporting to the IRB, appropriate institutional officials, and the Department or Agency head of (i) any unanticipated problems involving risks to subjects or others or any serious or continuing noncompliance with this policy or the requirements or determinations of the IRB; and (ii) any suspension of IRB approval.

The exact term "adverse event" does not appear in the "Common Rule." However, as the OHRP letter to this author indicates, the term "Incident Report" includes adverse events.
To the knowledge of this author, and according to personal communications with former staff of OPRR, there have not been any additional clarifications, directives, or guidance about reporting requirements other than repeated recitation of the rule given above (Gary Ellis, personal communication). Therefore, the use of the term "adverse events" in this paper refers to the three categories of reporting requirements, consistent with the OHRP letter to this author (OHRP, 2000).

For the past eight years, there has been renewed criticism about the lack of protection of, and harm to, subjects in research. We reported on questionable practices involving the sudden withdrawal of drug treatment (Shamoo and Keay, 1996; Shamoo et al., 1997) and the administration of chemicals that provoke psychotic episodes (Shamoo, 1997; Shamoo and Sharav, 1997; Shamoo, 1999, 2000; Sharav and Shamoo, 1999a,b, 2000a,b). These reports were followed by allegations of abuse from patients, patients' advocates, family members, researchers surveying the published literature, conferences, the media, hearings of the National Bioethics Advisory Commission (NBAC) (NBAC, 1998), and Congressional hearings (Sharav and Shamoo, 1999a,b). As a result of Congressional hearings, media attention, and NBAC's hearings, the Director of the National Institute of Mental Health (NIMH) suspended 30 intramural clinical trials (Marshal, 1999). Since 1999, OPRR has suspended clinical research on human subjects at eight major research institutions (OPRR, 1999; Brainard, 2000; Black, 2000). At one institution, several administrators were dismissed and strict new safeguards were put in place (Malakoff, 2000). Our reports and the OHRP actions and investigations raise concerns about the protection of human subjects in research. Are there legitimate reasons for concern? In the analysis that follows, we attempt to bring some quantitative information to bear on whether these concerns are justified.
We will make the argument that there are, indeed, reasons for concern.

METHODS

This study takes a three-pronged approach to determining how many adverse events are reported and their significance. First, information was secured from OHRP on the number of adverse event reports. Second, estimates were made of the number of humans enrolled in research studies. Third, based on other available information, estimates were made of the expected rate of adverse events.

1. Adverse Events Reported to OHRP

Through the use of the Freedom of Information Act (FOIA), we were able to obtain the following from the Office for Human Research Protections (OHRP, 2000). In response to a FOIA request submitted on March 14, 2000, OHRP released a body of data covering January 1990–August 2000. Adverse event summaries provided to us by OHRP were reported to OHRP by the institutions themselves. Others were reported through complaints or discovered during investigations and site visits. The data were contained in 878 rows with eight columns of information. The items in the database were: name of institution, city, state, name of responsible coordinator, assurance number, date opened, date resolved, and comments. We used the name of the institution, year opened, and comments. Comments were categorized as non-compliance; adverse events; action letters; suspension/termination; and deaths.
Whenever the plural case was used (for example, "adverse events"), we counted it as two adverse events, since we have no way of knowing the exact number from the information given. The numbers obtained, therefore, are minimal estimates. In most cases, however, the actual number of adverse events was given. Non-compliance meant any violation of the regulations, from lack of a quorum to failure to meet reporting requirements. An action letter means any requirement imposed on the institution by OHRP.

a. OHRP's letter to this author describes the information forwarded to us as: "A list of all Institutional Incident Reports (IRPTs) from January 1990 to the present [18 August, 2000]." IRPTs are documents that describe adverse events. Some of these adverse events are reported to us by the institutions themselves; others are reported by complainants or discovered during investigations/site visits. OHRP's jurisdiction during those dates covers only those studies supported by the Department of Health and Human Services (DHHS).

b. OHRP's letter to the author describes the information forwarded to us as: "A list of Compliance Oversight Branch Investigations (COBIs) resulting in restrictions/actions to Multiple Project Assurances from January 1990 to June 2000." This was a list of 41 investigations with a one-paragraph summary of the nature of each inquiry and the actions taken by OHRP. These are instances in which OHRP conducted a full investigation of noncompliance events warranting substantial corrective action, often involving at least limited suspension of specified research while corrective actions were implemented. We classified the actions taken as follows: suspension/termination; problems with informed consent; requirement for re-review of the protocol by the IRB or by others; requirement for the development of an education program; and requirement for continuous reporting to OHRP as frequently as quarterly.

2. Estimate of the Number of Adverse Events that Occur in Human Subjects in the U.S.

We will first estimate the number of research subjects in studies supported by NIH from NIH tracking data of women versus men (Roth et al., 2000). We will then make a conservative estimate of the number of human subjects in all other federal programs. The number of human subjects in privately funded clinical trials, such as those funded by industry, will be estimated by comparing industry's budgetary expenditure on research to that of NIH, using the NIH human subjects per dollar as the multiplier. In order to estimate the occurrence of adverse events in the U.S. population of human subjects, we will utilize two methods: (a) the rates of deaths, suicides, and attempted suicides in the general population to estimate the occurrence of such events within the human subjects population, and (b) the rate of adverse events in health care found by the Institute of Medicine (IOM) report.

The pharmaceutical industry spends $26 billion annually on research (from the pharma.org website). This compares to roughly $16 billion for NIH (NIH, 2000), making the industry figure 1.625 times the NIH budget. (The NIH budget for FY1997 was about $15 billion; however, we were not able to obtain the exact FY1997 figure for industry.) Our estimate assumes that the cost and distribution of patient subjects is basically similar in federally and industry sponsored research. If one takes the funds expended on FDA-regulated clinical trials (a portion of all research involving human subjects) alone, industry expenditure is $3.2 billion and government expenditure is $0.8 billion (Center Watch, 2000). For this fraction of human subject research the ratio is even larger (4.0). It is important to note that both NIH and industry conduct part of their research in foreign countries. Therefore, a portion of our estimates includes human subjects at those foreign sites.
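The two expenditure ratios used as multipliers in these estimates reduce to simple arithmetic. The following is a minimal sketch, using only the dollar figures given in the text (all in billions); it is a back-of-the-envelope check, not new data:

```python
# Research-expenditure ratios used later to scale NIH subject counts.
# All dollar figures (in billions) are taken from the text above.
NIH_TOTAL = 16.0            # NIH research budget (NIH, 2000)
INDUSTRY_TOTAL = 26.0       # pharmaceutical industry research spending (pharma.org)
GOV_FDA_TRIALS = 0.8        # government spending on FDA-regulated trials (Center Watch, 2000)
INDUSTRY_FDA_TRIALS = 3.2   # industry spending on FDA-regulated trials (Center Watch, 2000)

total_ratio = INDUSTRY_TOTAL / NIH_TOTAL            # 26/16 = 1.625
fda_ratio = INDUSTRY_FDA_TRIALS / GOV_FDA_TRIALS    # 3.2/0.8, approximately 4.0
```

The 1.625 multiplier is the one applied to the NIH subject count in the Results; the 4.0 ratio shows that the comparison is even starker when restricted to FDA-regulated trials alone.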
At present, we have no data to determine the actual numbers of human subjects at foreign sites supported by government and industry. It is safe to assume that the overwhelming majority of human subjects supported by the U.S. government and industry are U.S. residents.

RESULTS

Number of Institutional Incident Reports

The Office for Human Research Protections (OHRP) (formerly the Office for Protection from Research Risks [OPRR]) has jurisdiction over research supported by the Department of Health and Human Services (DHHS) agencies. The agencies most relevant to this report are the National Institutes of Health (NIH), the Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), and the Substance Abuse and Mental Health Services Administration (SAMHSA). Therefore, OHRP data consist of reports from all DHHS agencies conducting research with human subjects and supported by DHHS funds. It is important to note that OHRP jurisdiction does not include FDA-regulated research (drugs, biologics, and devices). For example, research funded by industry in support of a drug application for marketing in the U.S. falls outside OHRP jurisdiction. The regulation of these drug applications does fall within the FDA's jurisdiction, which has its own separate reporting requirements. However, if research is conducted or supported by the FDA (which is minimal), then it falls under OHRP's jurisdiction.

There are three reporting categories of adverse events made to OHRP. These categories are clearly much narrower than the FDA reporting requirement (21CFR56.103(b)(5)) and thus represent (yet again) a minimal figure. The FDA regulation requires the reporting of "all unanticipated problems involving risk to the subject or others" (21CFR312.66). The "Common Rule," which governs DHHS agencies' compliance with human subjects protections, requires: "Written procedures for ensuring prompt reporting to Department or Agency head of (i) any unanticipated problems involving risks to subjects or others or any serious or continuing non-compliance with this policy or the requirements or determinations of the IRB; and (ii) any suspension or termination of IRB approval." (Emphasis added by the author.)

[Figure 1: Number of Institutional Incident Reports (IRPTs) versus year of opening the incident report case at the Office for Human Research Protections (OHRP).]

Since 1991, the regulations have required reporting of unanticipated problems involving risks to subjects or others. There is very little ambiguity in this language, especially if the problems were unanticipated. As we will see later, suicides and attempted suicides in the overwhelming number of clinical trials are (certainly at a minimum) unanticipated, and clearly unreported. Figure 1 shows the number of Institutional Incident Reports (IRPTs) versus year of opening of the incident report case. The number of incidents for the year 2000 covers only half a year; the full-year figure could therefore double to about 300 incidents. It is clear that the number of incident reports is on the rise and has risen in linear fashion since 1995. The categorization of these reports reveals the number of adverse events (Figure 2, panel A) and the number of deaths (Figure 2, panel B).

[Figure 2: Categories in the incident reports versus year of opening the incident report case. Panel A represents the number of adverse events; Panel B represents the number of deaths.]
Again, a reminder that for year 2000, the figures represent a half-year’s reports. Table I shows for each category of problems the percent of that category to the total number of Institutional Incident Reports (IRPTs). “Adverse Events” represents the major category, as 44% of IRPTs concerned adverse events. The general category of “NonCompliance” was 14%. Action letters represent 25%, and 12% of IRPTs resulted in the suspension/termination of the project or study. Only 1% of IRPTs reported death(s). There were 8 deaths reported in ten years. Table II shows the top five institutions with the highest incident reports. These institutions comprise several of the most prestigious research institutions. Surprisingly, two of them were federal research campuses. TABLE I Problems identified from institutional incident reports (1990–2000) Type Adverse Events Non-Compliance Action Letters Suspensions/Terminations Deaths Percentage 44% 14% 25% 12% 1% 206 A.E. Shamoo TABLE II Top five institutions with the highest incident reports (January, 1990–August, 2000) Number Number Number of Number Number of Incident of Non Adverse of Action of SusReports CompliEvents Letters Ter- pensions ance minations 1. SUNY Buffalo 2. USN Natl Naval Med CTR 3. U Buffalo 4. St. Joseph’s Hospital & Med CTR-Tampa 5. NIH Intramural Program 88 43 1 3 154 1 115 0 1 21 40 39 0 0 4 9 0 66 0 2 22 3 16 0 Estimates of the Number of Human Subjects in Research in the United States Data from NIH reports tracking the participation of women and minorities in research as human subjects (Roth et al., 2000) were obtained. The data show the overall aggregate data on the number of clinical trials and human subjects in the United States funded by NIH. Table III shows the number of studies active in FY1997. There were 606 phase III clinical trials and 2,386 other clinical research projects. 
This is in addition to the 1,217 clinical projects/research conducted intramurally at the NIH campus, bringing the total number of clinical research studies/trials supported by NIH to 4,209. The majority of these studies (about 57%) represent clinical research other than phase III trials. The more than four thousand studies conducted annually is an index of the total number of human subjects enrolled in research supported by NIH.

TABLE III Number of studies active in FY1997

Category                       Number    Percentage of Total
Extramural Phase III Trials    606       14.40%
Other Clinical Research        2,386     56.69%
Intramural                     1,217     28.91%
Grand Total                    4,209     100%

Source: Report of Roth et al. (2000), NIH Tracking/Inclusion Committee, "Implementation of the Women and Minorities As Subjects in Clinical Research."

Table IV shows the actual number of human subjects enrolled in research for studies supported by NIH. The total number of human subjects enrolled is a little over six million for extramural research and nearly one million for intramural research, or a little over seven million in all.

TABLE IV Aggregate enrollment data for all NIH-funded research protocols, FY1997*

              Number       Percentage of Total
Extramural    6,078,099    100%
Intramural    986,087      100%
Total Human Subjects = 7,064,186

*Source: Report of Roth et al. (2000), NIH Tracking/Inclusion Committee, "Implementation of the Women and Minorities As Subjects in Clinical Research."

Table V shows the estimate of the number of human subjects in research in the United States. The first column gives the actual number available for NIH studies. What is not known is the number of human subjects enrolled in research in other Public Health Service (PHS) agencies.
For example, the Substance Abuse and Mental Health Services Administration (SAMHSA) conducts drug abuse and behavioral health services research all across the country. Let us assume the number of human subjects enrolled in all these PHS agencies (other than NIH) to be around 10,000 subjects. The next row estimates the number of human subjects enrolled in research at other federal agencies, such as the National Science Foundation (NSF), the Environmental Protection Agency (EPA), and the Department of Energy (DOE), up to a total of seventeen agencies. Again, let us assume 10,000 subjects. These assumptions are clearly low, and hypothetical, but nevertheless provide us with a number to use in our overall estimates.

TABLE V Number of human subjects in research in the US, FY1997

Funded By                                                          Actual #     Estimates
NIH                                                                7,064,186    7,064,186
Other PHS Agencies                                                 unknown      10,000?
Others under Common Rule (i.e., NSF, EPA, DOE, . . . 17 agencies)  unknown      10,000?
Privately Funded, FDA Bound & Non-FDA Bound                        unknown      11,479,302*
Total                                                              7,064,186    18,563,488?

*See Methods for details.

The last row gives the estimate of the number of human subjects enrolled in research efforts that are privately funded. Most of this research is conducted by the pharmaceutical industry, with nearly $26 billion spent annually (Pharma, 2000). Research funded by the pharmaceutical industry is conducted at various sites: in-house, universities, hospitals, Contract Research Organizations (CROs), Site Management Organizations (SMOs), and others. By simple comparison to the NIH funding of $16 billion (NIH, 2000), we estimate that in research supported by the pharmaceutical industry alone, the number of human subjects enrolled could be as high as 11,479,302. It is well known that the percentage of the NIH budget spent on basic research is larger than the percentage of the industry budget spent on basic research (NSF, 2000). Therefore, the number of human subjects enrolled in research supported by industry could be even larger.
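The Table V total can be reproduced directly from the figures above. The following minimal sketch applies the budget multiplier from the Methods; the 10,000-subject figures are the text's own hypothetical assumptions:

```python
# Reproducing the Table V estimate of human subjects in U.S. research, FY1997.
NIH_SUBJECTS = 7_064_186      # actual NIH enrollment (Table IV)
OTHER_PHS = 10_000            # assumed in the text (hypothetical)
OTHER_COMMON_RULE = 10_000    # assumed in the text (hypothetical)
BUDGET_MULTIPLIER = 26 / 16   # industry-to-NIH spending ratio (1.625)

# Industry enrollment is estimated by scaling NIH enrollment by the spending ratio.
industry_subjects = round(NIH_SUBJECTS * BUDGET_MULTIPLIER)

total_subjects = NIH_SUBJECTS + OTHER_PHS + OTHER_COMMON_RULE + industry_subjects
print(industry_subjects, total_subjects)  # 11479302 18563488
```

This reproduces both the 11,479,302 industry figure and the 18,563,488 grand total in Table V.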
Pharmaceutical-industry-supported research may be FDA-bound or non-FDA-bound. In other words, not all research in this category ends up as part of an application in support of an Investigational New Drug (IND), biologic, or device. Privately funded non-FDA-bound research is not regulated, and thus its adverse events are not likely to be reported to OHRP. If one takes the funds expended on FDA-regulated clinical trials alone (a portion of all research involving human subjects), industry expenditure is $3.2 billion and government expenditure is $0.8 billion (Center Watch, 2000). The industry-to-NIH ratio for human subject research of 4.0 is even larger than the 1.625 ratio of total research expenditures of the pharmaceutical industry and NIH. Moreover, other privately funded research occurs which does not fall into any of the aforementioned categories. For example, the chemical industry tests pesticides on human subjects for pesticide registration with the EPA, but this activity is not regulated by federal law (Gorovitz and Gorovitz, 2000). Another example is clinical research supported by a large number of private foundations.

Restrictions/Actions of Compliance Oversight Investigations to Multiple Project Assurances

There have been 41 investigations conducted by OHRP (formerly OPRR) in the past ten years (Table VI). Nearly 45% of these investigations occurred in the past two and a half years. There were 21 suspensions/terminations, of which 57% occurred in the past two and a half years. In all other categories, there is no discernible temporal pattern (data not shown).

TABLE VI Problems from the MPA investigations

Type                            Number    Percent
Number of Investigations        41        100%
Suspensions/Terminations        21        51%
Informed Consent                14        34%
Requiring Re-Review             11        27%
Requiring Education Program     18        44%
Requiring Repeated Reporting    29        71%
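The percentages in Table VI are simply each action count divided by the 41 investigations; a quick arithmetic check:

```python
# Each percentage in Table VI is the action count over 41 total investigations.
TOTAL_INVESTIGATIONS = 41
actions = {
    "Suspensions/Terminations": 21,
    "Informed Consent": 14,
    "Requiring Re-Review": 11,
    "Requiring Education Program": 18,
    "Requiring Repeated Reporting": 29,
}
percentages = {k: round(100 * n / TOTAL_INVESTIGATIONS) for k, n in actions.items()}
# Yields 51%, 34%, 27%, 44%, and 71%, matching Table VI.
```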
Table VI shows that 51% of those investigations resulted in suspensions/terminations of the Multiple Project Assurances (MPAs). Problems with informed consent were found in 34% of the investigations. OHRP required an education program in 44% of the investigations, and mandated continuous reporting (as frequently as quarterly) in 71%.

Estimate of Suicides and Attempted Suicides among Human Subjects

In order to further confirm that eight deaths (as reported) in ten years is misleading, we give an estimate of suicides and attempted suicides among human subjects (Table VII) based on the rates of suicides and attempted suicides in the general population. We give the estimates for one year and for ten years (the length of our IRPT data). At the bottom of the table, we state how we arrived at those figures. The second row shows that suicides among NIH-supported studies (using the suicide rates of the general population) should have been reported at approximately 58 per year, or 580 for the ten years. The estimates of attempted suicides among NIH-supported studies are 432 per year, or 4,320 for the ten years.

TABLE VII Estimates of suicides and attempted suicides among human subjects

                                                            For 1 Year*    For 10 Years
Deaths reported to OPRR                                     ?              8
Projected suicides among NIH supported studies              58             580
Projected attempted suicides among NIH supported studies    432            4,320
Sub-total                                                   490            4,900
Suicides for all human subjects                             155            1,550
Attempted suicides for all human subjects                   1,145          11,450
Sub-total                                                   1,300          13,000

*Suicides and attempted suicides were calculated by multiplying their annual rates in the general population (suicide rate = 1 × 10⁻⁴; attempted suicide rate = 7.4 × 10⁻⁴) (Merck, 2000) by the total estimated numbers of human subjects shown in Tables IV and V, then dividing by 12, since the time spent by human subjects in a trial may not be 12 months. We took a one-month duration of clinical trial as a conservative estimate of the average length of time.

When we use
the same rates for the general population for all human subjects in research, we find 155 suicides per year, or 1,550 for the ten years. Attempted suicides for the entire human subjects population come to 1,145 per year, or 11,450 for the ten years. The number of deaths among the seventy million people enrolled in ten years of research supported by NIH alone, corrected for the one-month duration of study participation, is over 51,000. Therefore, if we account for just two types of adverse events, suicides and attempted suicides, we find a very large discrepancy with the actual number of deaths reported for ten years: only 8.

DISCUSSION

In order to shed light on the magnitude of adverse events, we see from Table I that 44% of incident reports involved adverse events. This is a very large number. If this is an average figure for all institutions (reporting or not reporting), then it is truly large. However, if this figure is representative only of institutions that report incidents of adverse events, it is then logical to ask: where are the reports from the other institutions? It is highly unlikely that these are the total absolute numbers, as we will discuss later.

Table II shows the nature and magnitude of problems arising at the five institutions that have had the highest number of incident reports. These institutions are major research organizations. Either they have better reporting systems or they simply were deluged with problems. The two federal research institutions, NIH and the Naval Medical Center, should be at the forefront of reporting adverse events. It would be of interest to know the real situation responsible for these numbers. Research institutions have a legal obligation to report unanticipated adverse events that may influence the risk–benefit relationship (45 CFR 46; 21 CFR 56; Prentice and Gordon, 1997). The prompt reporting requirement is especially pertinent for serious adverse events.
Serious adverse events are those that are fatal, life-threatening, or require in-patient hospitalization (Prentice and Gordon, 1997).

Estimates of Adverse Events

Since 1991, regulations have required the prompt reporting of unanticipated problems involving risks to subjects or others (45 CFR 46.103(b)(5)). There is very little ambiguity in this language, especially if the problems were unanticipated. The data indicate 878 incident reports for the entire seventy million subjects over a period of ten years. The number of incident reports has been on the rise since 1995 (see Figure 1). Multiple factors may underlie this startling increase: increased awareness by research institutions of their reporting responsibility, increased complaints, an increased number of studies, and an increased number of investigations. The rate of adverse events is 5.5 × 10⁻⁵, or about 55 adverse events per one million subjects. The death rate is 1.14 × 10⁻⁷, or about one death per 10 million subjects. These rates are obviously far too low compared to the normal death rate in the population.

Recently, attention was focused on the issue of adverse events to patients in hospital settings. The Institute of Medicine (IOM) of the National Academy of Sciences (NAS) issued a report entitled "To Err is Human" (Kohn et al., 2000). The report defines adverse events as: ". . . an injury caused by medical management rather than by the underlying disease or condition of the patient." The IOM report estimates that errors cause between 44,000 and 98,000 deaths every year in U.S. hospitals. These estimates are based on the percentage of deaths in three studies: one in New York hospitals in 1984 (reported in 1991) and two from 1992 in Colorado and Utah hospitals (Lazarou et al., 1998; Thomas et al., 1999, 2000; Studdert et al., 2000). The report then extrapolates the percentage of deaths in these studies to the 33.6 million admissions to U.S. hospitals in 1997.
Despite the severe limitations of the IOM study, namely its use of ill-defined parameters (Brennan, 2000) and an extrapolation method based on non-random sampling, it exposes the important issue of a large number of deaths within the hospital healthcare system. The report does not state whether any of these hospital admissions were related to medical research with human subjects. Our understanding (from the authors) is that human subjects were not included in the data, because medical research records are not archived with regular hospital medical records. Indeed, it is our understanding that medical research records are not archived at all, which will render any study of human subjects very difficult.

The reporting and accounting of adverse events in clinical care and research is an important component of outcomes for the well-being of patients and subjects. It took many years to reveal the estimated magnitude of adverse events, including deaths, in hospitals nationwide. The IOM report (Kohn et al., 2000) cites adverse events in 2.9% to 3.7% of patients admitted to hospitals in the early 1990s, and indicates that 6.6% to 13.6% of those adverse events resulted in death. A large majority of human subjects enrolled in research may not be hospital patients. Nevertheless, when patients undergo medical procedures, adverse events are an expected component of the outcome, and medical practitioners and the medical system are essentially similar in both care and research. Using the percentages of adverse events in hospital care from the IOM report, we can estimate upper limits for adverse events in research; the adverse-event reporting requirements for healthcare are probably broader than those imposed by the federal regulations. Thus, the upper limits of adverse events for all human subjects enrolled in research in the U.S. (assuming subjects’ duration in trials is one month)
vary from 44,850 to 57,228 per year, and deaths from 2,960 to 7,780. Over the ten-year span of the data, adverse events could approach the half-million mark, with deaths numbering between 30,000 and 77,000. We believe these numbers are unrealistically high. Yet the eight deaths reported over ten years for the nearly seventy million participants enrolled in NIH-supported research is absurdly low. The “normal” death rate among the 18.56 million people enrolled in research annually, corrected for duration per year in a study, is 13,533 deaths per year (NCHS, 1998). It is fair to assume that suicides and attempted suicides are unanticipated results in the overwhelming majority of clinical trials. This alarming incongruity remains to be explained. It is important to note, however, that for the portion of privately supported clinical trials that are FDA-bound, larger numbers of deaths are reported. For example, Khan et al. (2000), using FDA drug data, found that for seven new antidepressants the suicide rates were 0.4% to 0.7% and the attempted-suicide rates 2.7% to 3.4%. These figures are much higher than in the general population, in large part because of the prevalence of such adverse events in this patient population. Nonetheless, this fact supports our argument that the overall reporting of adverse events to NIH, and in trials not covered by the federal regulations, lacks any credibility. Adverse events should not be allowed to become “trade secrets,” as one industry representative claimed (Weiss, 1999). The report of a mere eight deaths in ten years among over seventy million human subjects leads to the inescapable conclusion that research institutions supported by NIH are failing to report, or are not accurately reporting, adverse events, including deaths. When investigations were conducted on-site, 51% of the projects were suspended or terminated as a result.
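The upper-limit extrapolation from the IOM percentages, and the expected-deaths baseline, can be reproduced with a short sketch. The 18.56 million enrollment figure, the one-month duration factor, and the IOM percentages are the paper's stated assumptions; the crude death rate of 8.75 per 1,000 person-years is inferred here from the 13,533 figure, not stated explicitly in the text.

```python
# Reproducing the paper's upper-limit extrapolation from the IOM
# hospital-care percentages, plus the expected-deaths baseline.
# All inputs are the paper's stated assumptions; the crude death rate
# of 8.75 per 1,000 person-years is inferred, not stated explicitly.

subjects_per_year = 18_560_000     # all U.S. research subjects per year
duration_factor = 1 / 12           # assumed one-month enrollment per subject
exposure = subjects_per_year * duration_factor   # ~1.55 million subject-years

# IOM percentages: adverse events per admission, and the fatal share thereof
ae_low, ae_high = exposure * 0.029, exposure * 0.037   # ~44,850 to ~57,230 per year
deaths_low = ae_low * 0.066                            # ~2,960 deaths per year
deaths_high = ae_high * 0.136                          # ~7,780 deaths per year

# Expected deaths from general-population mortality alone
crude_death_rate = 8.75 / 1000                         # deaths per person-year
expected_deaths = exposure * crude_death_rate          # ~13,533 per year

print(f"adverse events/year: {ae_low:,.0f} to {ae_high:,.0f}")
print(f"deaths/year (IOM-based): {deaths_low:,.0f} to {deaths_high:,.0f}")
print(f"expected deaths/year (population baseline): {expected_deaths:,.0f}")
```

Multiplying the annual figures by ten gives the decade totals discussed in the text.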
Although there is no direct relationship between suspension/termination and adverse events per se, this indicates an atmosphere of poor compliance with the federal regulations. It is important to note that OHRP investigations have never included an audit of the medical records of human subjects.

Our data do not address the causes of the under-reporting or non-reporting of adverse events by research institutions. Research institutions could claim that the reporting requirements are ambiguous. On the surface, this explanation may appear consistent with the recent increases in reporting of adverse events; however, the increased reporting does not coincide with any change in the language of the reporting requirements. Another possible explanation is that adverse events were reported to IRBs. The regulations, however, require reporting to the federal agency; moreover, if this claim is true, we await the publication of such data. The lack of reporting of adverse events is consistent with a letter from the chairman of an IRB at the University of Cincinnati, cited by this author during a Congressional hearing (Shamoo, 1999b, p. 63), which states: “It has become increasingly apparent that adverse event/death reports are not always being filed with the IRB in a timely manner, and in some instances not at all.” This lack of adherence to the federal regulations is further supported by the recent testimony of a General Accounting Office (GAO) official (Rezendes, 2000) after an investigation of eight Veterans Administration (VA) medical centers. He reports “a disturbing pattern of noncompliance across the centers we visited. The cumulative weight of the evidence indicated failures to consistently safeguard the rights and welfare of research subjects.” A further supporting piece of evidence is that since gene therapy recently came under scrutiny (Nelson and Weiss, 2000), the reporting of adverse events has increased dramatically.
NIH recently reported (NIH website) that for the period February 11 to June 1, 2000, less than four months, there were 921 serious adverse events in 402 protocols submitted to NIH. If the same reporting requirements for serious adverse events were adhered to for all human subjects enrolled in research, the numbers would likely be in concordance with our estimates.

CONCLUSIONS

From our data and from reports in the literature and the media, we draw the following conclusions:
1. The overall number of human subjects enrolled in research is between 10 and 19 million per year.
2. Deaths in the thousands annually are not reported.
3. Adverse events in the tens of thousands annually go unreported.
4. Research institutions bear the legal and moral responsibility for failing to report adverse events, including deaths, to the authorities.
5. The federal government (NIH, OHRP, and the FDA) bears the legal and moral responsibility for failing to enforce accurate reporting.

RECOMMENDATIONS

These recommendations are made in the context of the past ten years of publications, conferences, testimonies, and advocacy for greater protection of human subjects. This paper adds further credence to that advocacy, but it does not by itself constitute the entire case for greater protection of human subjects in research. In this spirit, the following is recommended to ameliorate the overall problems associated with human subjects protections.
1. The federal government should undertake a thorough study of adverse events among human subjects over the past ten years.
2. The federal government (OHRP) should keep track of the number of all human subjects enrolled in research, by organization and by project.
3. The federal government should require research organizations to report all adverse events to the federal government.
4.
The federal government should require investigation by disinterested individuals of all serious adverse events.
5. The federal government should suspend or terminate a clinical trial once serious adverse events reach a certain threshold (to be determined).
6. The federal government should require that investigations conducted by OHRP include the auditing of subjects’ medical records and, if necessary, an interview with the subject.
7. The federal government should annually publish the number of human subjects, adverse events, serious adverse events, instances of noncompliance, suspensions/terminations, and other relevant information for each research organization in the country. These figures should be posted on a government website.

ACKNOWLEDGMENTS

We would like to thank Mary M. Cassidy, Ph.D., of George Washington University and Mary F. Ruzicka, Ph.D., of Seton Hall University for their helpful comments on the manuscript.

REFERENCES

Advisory Committee on Human Radiation Experiments (ACHRE) (1995). Final Report. Washington, D.C.: U.S. Government Printing Office, October.
Black, H. (2000). “Research and Human Subjects: Recent Temporary Suspensions, Regulation Initiative, Inject Uncertainty into Study Management.” The Scientist 14[19]: 1, Oct. 2.
Brainard, J. (2000). “Will a ‘Fresh Face’ Bring a New Approach to Federal Protection of Human Subjects?” Chronicle of Higher Education, July 21, 2000.
Brennan, T.A. (2000). “Sounding Board—The Institute of Medicine Report on Medical Errors—Could It Do Harm?” New England Journal of Medicine 342: 1123–1125.
Center Watch (2000). http://www.centerwatch.com
Code of Federal Regulations (CFR) (1998). 21 Part 56.
Code of Federal Regulations (CFR) (1991). 45 Part 46.
Code of Federal Regulations (CFR) (1981). 45 Part 46.
Gorovitz, H. and Gorovitz, S. (2000). “Pesticide Toxicity, Human Subjects, and the Environmental Protection Agency’s Dilemma.” The Journal of Contemporary Health Law and Policy 16: 427–458.
http://www.salon.com/health/feature/2000/08/18/human-subject/print.htm
Khan, A., Warner, H.A. and Brown, W.A. (2000). “Symptom Reduction and Suicide Risk in Patients Treated with Placebo in Antidepressant Clinical Trials.” Arch. Gen. Psych. 57: 311–317.
Kohn, L.T., Corrigan, J.M. and Donaldson, M.S., eds. (2000). To Err Is Human: Building a Safer Health System. Committee on Quality of Health Care in America, Institute of Medicine, pp. 1–287. Washington, D.C.: National Academy Press.
Lazarou, J., Pomeranz, B.H. and Corey, P.N. (1998). “Incidence of Adverse Drug Reactions in Hospitalized Patients—A Meta-Analysis of Prospective Studies.” JAMA 279: 1200–1205.
Malakoff, D. (2000). “Flawed Cancer Study Leads to Shake-up at University of Oklahoma.” Science 289: 706–707.
Marshall, E. (1999). “NIMH to Screen Studies for Science and Human Risks.” Science 283: 464–465.
Merck Manual website (2000). http://www.merck.com/pubs/mmanual/section15/chapter190/190a.htm
National Bioethics Advisory Commission (1998). “Research Involving Persons with Mental Disorders That May Affect Decisionmaking Capacity.” 1: 1–88, December.
National Center for Health Statistics (NCHS) (1998). http://www.cdc.gov/nchswww/datawh/statab/pubd/47-17.htm
Nelson, D. and Weiss, R. (2000). “Gene Test Deaths Not Reported Promptly.” The Washington Post, Jan. 31, A1.
NIH (2000) website: http://hhs.gov/progorg/asmb/budget/fy99budget/pdffiles/nihl.pdf
National Science Foundation (NSF) (2000). Science and Engineering Indicators—2000 (NSB00-1). Arlington, VA. It can also be accessed through NSF or other government websites.
Office for Human Research Protections (OHRP) (2000). FOIA Case ID: 25214, August 18, 2000 letter to A.E. Shamoo from Barry W. Boman (FOIA Coordinator).
The letter contained: (1) a list of Compliance Oversight Branch Investigations (COBIs) resulting in restrictions/actions to Multiple Project Assurances, from January 1990 to June 2000; and (2) a list of all Institutional Incident Reports (IRPTs) from January 1990 to August 18, 2000.
Office of Protection from Research Risks. “Letter of March 22, 1999, from OPRR in VA Greater Los Angeles Healthcare System.”
Pharma (2000) website: Annual Report (www.pharma.org).
Prentice, E.D. and Gordon, B. (1997). “IRB Reviews of Adverse Events in Investigational Drug Studies.” IRB 19: 1–4.
Public Law (PL) 93–348, July 12, 1974.
Rezendes, V.S. (2000). “VA Research System for Protecting Human Subjects Needs Improvement.” GAO Statement before the Subcommittee on Oversight and Investigations, Committee on Veterans’ Affairs, House of Representatives. http://veterns.house.gov/hearings/schedule106/sept00/9–28–00/gao.htm
Roth, C., Pinn, V.W., Bates, A. and Faming, L. (2000). NIH Tracking/Inclusion Committee, Implementation of the NIH Guidelines on the Inclusion of Women and Minorities as Subjects in Clinical Research, FY 1997, Second Revision, May 2000, pp. 1–24.
Shamoo, A.E. (2000). “Future Challenges to Human Subjects Protection.” The Scientist, June 26, 35.
Shamoo, A.E., ed. (1999a). Accountability in Research 7[2–4]: 101–309.
Shamoo, A.E. (1999b). “The Unethical Use of Human Beings in High Risk Research Experiments.” Testimony to the April 21, 1999 Joint Hearing before the Subcommittee on Oversight and Investigations and the Subcommittee on Health of the Committee on Veterans’ Affairs, House of Representatives, Serial No. 106–10. Washington, D.C.: U.S. Government Printing Office, 2000. ISBN 0-16-060247-5.
Shamoo, A.E. (1999). “Institutional Review Boards (IRBs) and Conflict of Interest.” Accountability in Research 7: 201–212.
Shamoo, A.E. and Sharav, V.H. (1997).
“Unethical Use of Persons with Mental Illness in High Risk Research Experiments.” BioLaw II: S23–S31.
Shamoo, A.E., ed. (1997). Ethics in Neurobiological Research with Human Subjects, pp. 1–335. Amsterdam: Gordon and Breach Publishers.
Shamoo, A.E., Irving, D.N. and Langenberg, P. (1997). “A Review of Patient Outcomes in Pharmacological Studies from the Psychiatric Literature, 1966–1993.” Science and Engineering Ethics 3: 395–406.
Shamoo, A.E. and Irving, D.N. (1993). “Accountability in Research Using Persons with Mental Illness.” Accountability in Research 3: 1–17.
Shamoo, A.E. and Keay, T. (1996). “Ethical Concerns about Relapse Studies.” Cambridge Quarterly of Healthcare Ethics 5: 373–386.
Sharav, V.H. and Shamoo, A.E. (2000a). “Lab Rats—Why Do People Who Participate in Clinical Studies Have Fewer Protections Than Animals?”
Sharav, V.H. and Shamoo, A.E. (2000b). “Are Experiments That Chemically Induce Psychosis in Patients Ethical?” BioLaw II, No. 1: S1–S36.
Sharav, V.H. and Shamoo, A.E. (1999a). “Unethical Human Experiments in Unwitting Patients.” Statement for the Record submitted to the U.S. Senate Subcommittee on Public Health and Safety of the Senate Health, Education, Labor and Pensions Committee, February 2, 2000.
Sharav, V.H. and Shamoo, A.E. (1999b). “The Need for Better Protections for Vulnerable Groups When Used in Research Experiments.” Testimony to the Subcommittee on Criminal Justice, Drug Policy, and Human Resources, Committee on Government Reform, U.S. House of Representatives, December 9, 1999; hearing held at the New York Lawyers Association, New York, N.Y.
Studdert, D.M. et al. (2000). “Negligent Care and Malpractice Claiming Behavior in Utah and Colorado.” Medical Care 38: 250–260.
Thomas, E.J. et al. (2000). “Incidence and Types of Adverse Events and Negligent Care in Utah and Colorado.” Medical Care 38: 261–271.
Thomas, E.J. et al. (1999).
“Cost of Medical Injuries in Utah and Colorado.” Inquiry 36: 255–264.
The Washington Post (2000). “Gene Therapy Run Amok?” Editorial, A18.
Weiss, R. (1999). “Gene Therapy Firms Resist Publicity.” The Washington Post, December 11, A2.