
Protections for Human Subjects in Research: Old Models, New Needs?


Published on Jan 24, 2022

Abstract

Research within the United States that involves collecting and analyzing certain types of data from individual people has been subject to regulation since 1974. The rules governing research with so-called “human subjects” emerged out of specific research practices in the biomedical and social sciences within the government, as well as in response to revelations of egregious abuses of participants. Among other provisions, the federal regulations require prior review of certain proposed research projects by an institutional review board, or IRB. Many of the features of the federal regulations, including requirements for securing “informed consent” from research participants, focus on protections at the point of data collection. More recent research projects in computing and the data sciences, which rely upon large volumes of human-sourced data and information, have often collected their data via third-party platforms rather than via direct interaction with individual human subjects. As a result, such projects have rarely been subject to formal IRB review. Given the potential harms to individuals and groups—such as racial discrimination and loss of privacy—that have been identified in some computing projects, is it time to revisit or expand the existing human-subjects regulations?

Keywords: human-subjects research; informed consent; institutional review boards; big data

Laura Stark
Department of Medicine, Health, and Society
Vanderbilt University

Learning Objectives

• Understand the impetus and origins of US federal regulations regarding protections for “human subjects” in research.

• Learn about requirements for “informed consent” when performing research based on data or information collected from human subjects.

• Learn about the intended role of institutional review boards, or IRBs, in the oversight of research involving human subjects.

*Author Disclosure: Portions of this case study are excerpted from Laura Stark, Behind Closed Doors: IRBs and the Making of Ethical Research (Chicago: University of Chicago Press, 2012), and are used with permission.

Introduction

Three men boarded Eastern Airlines Flight 305 out of the Washington, DC, airport on January 6, 1961, en route to Atlanta, Georgia. Two of the men were federal prisoners recovering from an illness that they had gone to the US National Institutes of Health (NIH) not to have cured but to contract. It may seem odd to enter a hospital in good health and to leave while recuperating. But within three years, nearly one thousand prisoners would have similar experiences. Throughout the 1960s, NIH flew and bused federal prisoners from across the country to NIH headquarters in Bethesda, Maryland, to serve in malaria studies and in virus research on pneumonia, flu, the common cold, and simian virus-40.1

The prisoners’ smooth journey from prison to hospital and back again was a remarkable legal accomplishment of two government agencies. And it was enabled by a novel group-review system that had been put in place at the NIH’s Clinical Center, the government’s research hospital, only a few years earlier. The Bureau of Prisons allowed healthy incarcerated men to be used in studies at the Clinical Center starting in 1960, so long as NIH researchers received prior approval from the Clinical Center’s new “Clinical Research Committee” and collected signed agreements from the incarcerated men.2

NIH’s system of review and oversight, including its Clinical Research Committee, eventually became the template for institutional review boards, or IRBs. The form and structure of IRBs were codified in US federal law during the 1970s, and have governed research in the United States involving people—so-called “human subjects,” who may be healthy or ill—in the biomedical sciences and social sciences ever since. The legislation that clarified the definitions and oversight requirements for research on human subjects reflected specific ethical, political, and institutional concerns at the time.3 Given those concerns, we can understand why most of the mandated oversight for research involving people has focused narrowly upon protections for individuals at the point of data collection.

Recent research in several areas of computing and the data sciences also involves collecting people’s data, but these projects have rarely been subject to systematic review within the existing IRB infrastructure. For example, several research projects on facial recognition algorithms have used large collections of images from publicly accessible repositories, rather than collecting photographs directly from individual participants. Yet recent controversies highlight potential negative impacts—for individuals and for communities—that can arise from research that involves the analysis of large volumes of publicly accessible, human-sourced data.4 Unlike traditional concerns in biomedical or social science research, these negative impacts can arise from the uses to which data are put, rather than from the collection of the data per se.

In light of new opportunities as well as challenges arising from uses of human-sourced data and information, it is helpful to examine more closely the evolution of the existing system of oversight and regulation for human-subjects research. Only then might novel needs and effective alternatives become clearer.

The Emergence of Human-Subjects Review Boards in the United States

The US National Institutes of Health started a human-subjects review board, dubbed the Clinical Research Committee, when its research hospital, the Clinical Center, opened in Bethesda in 1953. The Clinical Research Committee became the model for today’s IRBs.

Trying out new chemicals or technologies on healthy people in hospital rooms was—and remains today—a thorny proposition. NIH leaders created a procedure for expert group review not to restrict research but to enable and hasten research on human subjects at the Clinical Center. Researchers were worried about the optics of conducting experiments on healthy people with taxpayer money—what they considered “public relations” issues—and were vexed by the political, ethical, scientific, and, it would seem, interpersonal problems that unfolded in the hospital during the 1950s. Their primary and pragmatic goal was to build a bulwark against legal action. In the words of clinical directors from 1952, research guidelines would serve “as a counter-balance and check to protect not only the patient but the Institute involved and the Clinical Center as a whole.” (In this context, a “patient” was anyone admitted to the Clinical Center for research, whether healthy or sick.) Early NIH leaders, in other words, tried to strike a delicate balance. To scientists they sought to recruit, the leadership claimed that researchers were free to experiment boldly with few restrictions at the Clinical Center. (See Figure 1.) Leaders also tried to show that they placed responsible limits on research to satisfy federal lawyers, the reading public, and appropriations committees in Congress.5

Figure 1

This individual (reclining) was a healthy human subject at the NIH Clinical Center from 1954 until 1956. Researchers enrolled him in studies on psychoactive drugs (including LSD-25), which involved collecting data with an electroencephalogram. (Source: Courtesy Mennonite Central Committee [MCC] US Photo Collection, Series IX-13-2.5, used with permission.)

The NIH group was informed by recent developments. For example, the 1947 Nuremberg Code had been prompted by revelations about Nazi medical experiments during the Second World War. The 1947 code articulated several moral imperatives for the ethical conduct of research, such as the precept that no experiment should be done in which researchers believed subjects could die or be disabled. The US Department of Defense adopted the Nuremberg Code to guide field research on soldiers in 1953, the same year that NIH adopted group-consideration guidelines. NIH clinicians would have been aware that the armed forces adopted the Nuremberg Code since NIH scientists talked regularly with military officers—the Public Health Service, which includes the NIH, is part of “the uniformed services”—at Washington science venues, such as Atomic Energy Commission meetings, and at the Naval Medical Research Institute across the street.6 Other examples were available at the time, such as the Code of Ethics of the American Medical Association (AMA), which mirrored the Nuremberg Code, since AMA leaders were involved in the prosecution at the Nazi Doctors Trial.

But NIH leaders believed that these codes were either insufficient or altogether silent on the newest and most exciting topic of research: studies on so-called “Normals,” healthy people who could serve as test subjects. “Physicians do not ordinarily practice on normal people,” one NIH administrator, Irving Ladimer, pointed out to his audience at the AMA annual meeting in 1955. “One is therefore concerned with a distinct type of endeavor. The law has not set out permissible limits for this field as it has for medical practice.” There was something different about doing research on healthy citizens in hospitals, Ladimer felt, something that was not articulated in codes of medical ethics intended for care of sick people or to punish Nazis.7

In contrast to both the Nuremberg Code and the AMA Code of Ethics, the NIH Clinical Center adopted guidelines by which a group of researchers who were not involved in a given study would weigh in on the question of whether their colleagues had planned appropriately to protect their research participants. The four-page document that NIH leaders crafted in 1953, with the ponderous title, “Group Consideration of Clinical Research Procedures Deviating from Accepted Medical Practice or Involving Unusual Hazard,” marked a move away from deference to investigators’ personal discretion and toward deference to committee procedures in decisions about the treatment of research participants.8

In 1962, Congress passed a series of amendments to the Federal Food, Drug, and Cosmetic Act, which carried further implications for human-subjects research. The updated law required that pharmaceutical manufacturers demonstrate proof of the safety and effectiveness of new medications before they could receive formal FDA approval and be prescribed and sold. The amendments followed revelations that a new medication, thalidomide—which doctors had prescribed frequently to address symptoms like nausea in pregnant women—caused high rates of birth defects. The amendments required manufacturers to demonstrate proof of safety through a four-phase process, starting with tests on a small number of healthy people before moving to tests on sick people. This requirement expanded clinical trials involving healthy volunteers rather than studies involving only sick patients.9 The 1962 amendments also placed renewed focus on the question of “informed consent,” though they left open what would count as appropriate methods to document consent: they required only that research subjects be appropriately informed of risks and that they freely agree to participate in a given study, whether verbally or in writing.

Pressure to expand and codify informed consent at NIH grew considerably in 1964. In that year news broke of a case in which two physicians had injected twenty-two patients at New York’s Jewish Chronic Disease Hospital with cancer cells without first getting their consent. Within a community that included many Holocaust survivors, it was particularly upsetting to learn that clinical researchers had injected the patients as part of a study supported in part through the NIH’s National Cancer Institute. Because of this funding connection to NIH, the attorneys for the defendant hospital demanded that the Public Health Service—as the parent organization of the NIH—indemnify the hospital for the damages that it might have to pay to at least one elderly and arthritic plaintiff, Mr. Fink. NIH lawyers successfully argued that researchers funded through the NIH, but who conducted research at facilities other than the NIH Clinical Center, were not bound by the Clinical Center’s local policy. This time the NIH did not have to pay up, but the legal attention to the Clinical Center’s group-review practices prompted worry and defensiveness on the main NIH campus.10

US Senator Jacob Javits (R-NY) learned about the research abuses at the Jewish Chronic Disease Hospital, which was in the state he represented, and pressed the NIH to adopt standard practices for collecting written consent from research participants, much as he had urged the Food and Drug Administration to do a few years earlier. With mounting pressure from Congress and lawsuits looming, NIH director James Shannon deflected legal responsibility—and thus financial risk—for NIH-sponsored research. He spearheaded a policy requiring that researchers at universities and hospitals who planned to do studies funded by NIH first get approval from their own local committee of experts. The Clinical Center’s Clinical Research Committee, a form of ethics peer review, became the model exported to other institutions throughout the United States: each institution that received NIH funding would conduct its own local, group-based expert review.11 The policy took effect in 1966, protected the federal government financially and legally against any ethical missteps in NIH-funded research beyond Bethesda, and set researchers across the United States and around the globe scrambling to create new expert review committees to vet their studies.

NIH director Shannon’s plan was quickly implemented and expanded. Over the course of 1966, the US Surgeon General, William H. Stewart, issued a series of memos that detailed the review procedures required of institutions that received any funding from the Public Health Service; such funding included money to sponsor a study by a researcher employed at the institution, including clinician-investigators and professors. All such research centers—major universities, hospitals, and community associations—would need to “provide prior review of the judgment of the principal investigator or program director by a committee of his institutional associates.” Moreover, whereas each researcher had previously been given the opportunity—or burden—to create a fresh human-subjects review committee for each project, Surgeon General Stewart now advised creating one permanent committee at each institution to assess all research projects conducted on site. By 1971, the Secretary of the US Department of Health, Education, and Welfare (later renamed Health and Human Services) adopted the Surgeon General’s 1966 policy. Committee review was now required at any institution that received funding from any branch of the federal government, not only from the Public Health Service.12

What had developed as de facto policy within various federal agencies, spurred largely by concerns about the government’s legal liability, took on new urgency in 1972, following the public revelation of the long-running Tuskegee Syphilis Study on Black men in rural Alabama. The project had begun in 1932, when several hundred Black men were recruited for a study overseen by the Public Health Service. About two-thirds of the people had syphilis and the remaining third did not. The subjects in the study received medical exams and meals, but they were not informed that the purpose of the study was, as a journalist later revealed, to “determine through autopsies what damage untreated syphilis does to the human body.”13 For decades, the subjects’ diseases went untreated—long after safe and effective treatments for syphilis had become widely available.14 Public outcry over the Tuskegee Syphilis Study triggered Congress to pass the National Research Act in 1974, which formally empowered the US federal government to regulate research conducted on people.15

Human-Subjects Research Codified

When Congress passed the National Research Act in 1974, it required that a national commission be created to review and ideally improve the laws for human-subjects protections that had just been enacted. That body, formally called the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, worked hard on many thorny issues (such as research involving incarcerated people) for many long years. One of the group’s final acts, required by the 1974 law, was to articulate guiding principles for research on human subjects. In 1979 the group published the Belmont Report (named for the Smithsonian Institution’s Belmont Conference Center, where the group met), to outline the overarching spirit in which the regulations should be interpreted.16

These guiding principles have endured and continue to hold tremendous sway in research across the globe.17 In writing the “Belmont Principles,” the commission highlighted three moral principles that should guide research involving human subjects: respect for persons, beneficence, and justice. According to the Belmont Report, “respect for persons” means that research subjects should be treated as “autonomous agents,” capable of making decisions about their own best interests. This principle also recognizes that not all people are in a position to exercise their judgment freely; those with diminished autonomy, such as young children or people who are incarcerated, require special protections because they might be “subtly coerced or unduly influenced to engage in research activities for which they would not otherwise volunteer.” “Beneficence,” meanwhile, means that researchers must take a clear-eyed look at the potential harms of the research (short- and long-term, physical and informational) and, after balancing the harms against potential benefits, “make efforts to secure the well-being” of research subjects. “Justice,” the Belmont Report authors explain, means recognizing that the groups subject to the risks of research may be different from the groups reaping the benefits of research. In the words of the Belmont Report, researchers must consider “who ought to receive the benefits of research and bear its burdens.”18

The commissioners decided that the three principles of respect for persons, beneficence, and justice could be upheld by three corollary practices: ensuring that participants had adequate information when they agreed to be studied, that assurance often taking the form of written consent; ensuring that the risks to participants (whether physical, social, or legal) were appropriate in light of the potential benefits of the study, either for the participants or for others; and making sure that the people being studied were not chosen in discriminatory ways, for example, out of convenience or imbalance of power.

Based on these guidelines, the federal regulations defined what constitutes a “human subject,” as well as a research “intervention.” Today, the regulations are informally called the “Common Rule,” officially known as the US Code of Federal Regulations title 45 (“Public welfare, Department of Health and Human Services”), part 46 (“Protection of Human Subjects”). The regulations define a “human subject” as a

living individual about whom an investigator (whether professional or student) conducting research:

  1. Obtains information or biospecimens through intervention or interaction with the individual, and uses, studies, or analyzes the information or biospecimens; or

  2. Obtains, uses, studies, analyzes, or generates identifiable private information or identifiable biospecimens.

An “intervention” is defined to include “both physical procedures by which information or biospecimens are gathered (e.g., venipuncture) and manipulations of the subject or the subject’s environment that are performed for research purposes.” “Private information” includes any “information about behavior that occurs in a context in which an individual can reasonably expect that no observation or recording is taking place, and information that has been provided for specific purposes by an individual and that the individual can reasonably expect will not be made public (e.g., a medical record),” while “identifiable private information” includes private information “for which the identity of the subject is or may readily be ascertained by the investigator or associated with the information.”19

The National Research Act further codified that nearly all research involving “human subjects” that was conducted in hospitals, universities, or related organizations that receive federal research funding must be vetted by an IRB before it begins. The law also stipulated the composition of IRBs: they must include at least five people, though some boards have as many as twenty. The boards that I observed during the mid-2000s averaged ten members. (My own sociological study of IRBs had to be reviewed and approved by a local IRB before I could begin.)20

The bulk of members of an IRB are supposed to represent “expertise” in the areas in which investigators propose to conduct their research, although no board member who is actually involved in the study under review may vote on it. There must be at least one board member who is not affiliated with the institution, and at least one member whose primary concerns are “nonscientific.” Operationally, these requirements are often taken to mean that these members should not hold an advanced degree. In some cases, however, these members (sometimes referred to as “community members” or “community representatives”) do have formal research training, as was the case for physicians in private practice and theologians who served on the boards I observed. The federal government strongly urges that boards include at least one man and one woman, and it more gently encourages that members of various minority groups be included on the boards, though the regulations do not, strictly speaking, set requirements for the demographic composition of boards.21

Although the “Common Rule” regulations that guide IRB oversight have been amended over the years—including as recently as 2017—they still focus on protections for human subjects at or around the time of data collection. The recent proliferation of large, publicly available data sets presents conditions that do not always fit easily within the existing regulatory framework.

Human Subjects in an Era of Big Data

When Congress passed the National Research Act in 1974, programmable electronic computers were still mostly limited to large companies and government agencies; in-home personal computers were quite rare. In fact, soon after the act was passed, enthusiasts could buy the Altair 8800, a kit sold by the company MITS with which they could build their own computer at home; a few thousand units sold, each sporting 256 bytes of memory. That same year, the first commercial internet service provider, Telenet, became available, though widespread access to the internet still lay decades in the future.22

Given the revelations about medical abuses throughout the 1960s and early 1970s, and the relative scarcity of networked computing resources at the time, it is little surprise that the authors of the Belmont Report focused on specific aspects of research involving human subjects. Yet in recent years, potential harms from research in computing and data sciences that involve large volumes of human-sourced data have highlighted a growing gap between existing protections for human subjects and present-day research practices.

Consider, for example, research projects on facial-recognition algorithms, which depend upon huge collections of images. Some researchers have assembled their data sets by collecting facial images from publicly available websites or social media platforms. Individual photographs on these sites are often posted in accordance with the platforms’ terms of service, which may not explicitly limit or constrain reuse by third parties.23 In other words, unlike biomedical experiments in which researchers intervene directly with human subjects to collect specimens or other data, such computational research projects involve no direct interaction with research subjects at the point of data collection.

Yet harms have nonetheless occurred—both to individuals and to members of specific communities, including minoritized groups—from the deployment of such computational tools. Independent auditors at the US National Institute of Standards and Technology (NIST) have catalogued systemic biases in the performance of nearly two hundred distinct facial recognition algorithms, some of which yield false-positive rates one hundred times greater for faces of Black men from West Africa than for faces of white men from Eastern Europe; these same auditors found systematically higher false-positive rates for images of women than for men, across racial and ethnic groups.24
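To make concrete how such an audit quantifies demographic disparity, the following minimal Python sketch computes false-positive rates by group from a hypothetical table of impostor comparisons. It does not reproduce NIST’s data or methodology; the column names, groups, and figures are invented purely for illustration.

```python
# Illustrative sketch only: computing false-positive rates by demographic group,
# in the spirit of a facial-recognition audit. All data and column names are
# hypothetical; real audits (such as NIST's FRVT) use far larger data sets and
# their own protocols.
import pandas as pd

# Each row is one "impostor" comparison: two images of *different* people.
# "match" records whether the algorithm wrongly declared them the same person
# at its chosen operating threshold.
impostor_trials = pd.DataFrame({
    "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "match": [True, True, False, False, True, False, False, False],
})

# False-positive rate per group: the fraction of impostor pairs wrongly matched.
fpr_by_group = impostor_trials.groupby("group")["match"].mean()
print(fpr_by_group)

# One simple disparity measure: ratio of the highest group FPR to the lowest.
disparity_ratio = fpr_by_group.max() / fpr_by_group.min()
print(f"FPR disparity ratio: {disparity_ratio:.2f}x")
```

In an audit of this kind, a disparity ratio far above 1 signals that the algorithm’s errors fall disproportionately on particular groups even if its overall error rate looks low.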

Although these findings have galvanized the research community to accelerate work on mitigation techniques, many of these algorithms are already deployed across thousands of separate law-enforcement jurisdictions throughout the United States. When these research projects leave laboratories and enter public settings, systemic biases have real-world impacts, as several recent instances of wrongful arrest based on faulty facial recognition matches have made plain.25 Similar concerns have been raised about protections for individuals’ sensitive personal information. Even when each data set involved in a computational project has been “deidentified”—with personal information about specific individuals masked or removed—individuals can nonetheless be reidentified by combining seemingly deidentified data sets.26
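The reidentification risk is easiest to see in a worked example. The sketch below, with entirely invented data and column names, illustrates the general “linkage attack” technique: two tables that each look deidentified on their own are joined on shared quasi-identifiers (ZIP code, birth year, sex), reattaching names to sensitive records. It is not a reconstruction of any study cited here.

```python
# Illustrative sketch only: a "linkage attack" joining two individually
# deidentified tables on quasi-identifiers. All records are fictional.
import pandas as pd

# A "deidentified" research data set: no names, but quasi-identifiers remain.
health = pd.DataFrame({
    "zip": ["02139", "02139", "37212"],
    "birth_year": [1980, 1975, 1990],
    "sex": ["F", "M", "F"],
    "diagnosis": ["diabetes", "asthma", "hypertension"],
})

# A publicly available data set (for example, a voter roll) that includes names
# alongside the same quasi-identifiers.
public = pd.DataFrame({
    "name": ["A. Jones", "B. Smith", "C. Lee"],
    "zip": ["02139", "02139", "37212"],
    "birth_year": [1980, 1975, 1990],
    "sex": ["F", "M", "F"],
})

# Joining on the shared quasi-identifiers reattaches identities to the
# "anonymous" health records.
reidentified = health.merge(public, on=["zip", "birth_year", "sex"])
print(reidentified[["name", "diagnosis"]])
```

Because combinations of a few ordinary attributes are often unique to a single person, removing names alone does not guarantee anonymity once other data sets are available for linking.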

In each of these examples, little to no harm to individuals arose directly from the act of collecting human-sourced data; in many instances, individuals had “consented” to share their personal information (such as photographs) on publicly accessible websites and platforms, albeit without knowledge about research projects in which their data might be utilized. Nonetheless, individuals and groups have suffered harms.

In 2012, the US Department of Homeland Security published a counterpart to the Belmont Report for research in computer science and information security. Called the Menlo Report, the guidelines are explicitly modeled on the 1979 Belmont Report for the biomedical and behavioral sciences and extend the Belmont Principles to research “involving information and communication technology.” To the three Belmont Principles, the Menlo Report adds a fourth, “Respect for Law and Public Interest,” which emphasizes “compliance, transparency and accountability” for research in computer science and big data projects.27

Yet the Menlo Report has no regulatory equivalent—an important contrast with the Belmont Report, which is enforced in federal law through 45 CFR 46 (that is, the “Common Rule”). There is no binding legislation or regulatory requirement to implement or enforce the commitments articulated within the Menlo Report. Although the Menlo Report derives symbolic authority from being modeled on the Belmont Report, it remains unclear whether the guidelines have traction with researchers and whether researchers have the support—in terms of time, training, and administrative structure—to practice what the Menlo Report promotes. Some would say the lack of regulation is to the field’s detriment. Others might ask whether the Belmont Principles are the best model, given modern practices of data collection and use, and whether creating an ethics peer-review system requires regulation.28

Conclusions

Federal regulations that govern research involving human subjects within the United States evolved during the middle decades of the twentieth century. The National Research Act of 1974, which first codified procedures and protections for human-subjects research at a federal level, reflected a mixture of ethical concerns and legal considerations; it was also a response to very specific, egregious abuses. Today the Common Rule defines and regulates “human subjects” and “interventions” in a research setting—and mandates the oversight of research projects by a local institutional review board.

Although the Common Rule continues to apply to research across the biomedical and social sciences in the United States, these provisions provide an imperfect fit for many types of research projects in computing and the data sciences. For example, legal requirements for “informed consent” have not been applied uniformly to instances in which individuals’ identifiable personal information is collected from third-party sources.29

Some scholars have argued that the continuing emphasis within US federal regulations—which largely focus on protections for human subjects at the point of data collection—may no longer be sufficient. Moreover, the authors of the 1979 Belmont Report had taken for granted a clear separation between “research” and “practice,” especially clinical practice in a biomedical context. Yet the fluidity with which big-data projects now move between research settings and real-world deployment blurs such distinctions.30

Questions about the appropriate collection and use of human-sourced data have taken on a new urgency. Ubiquitous computing is becoming a facet of life for more and more people, and data about individuals’ seemingly private behaviors are being incorporated into large data sets available for research. The critical question is how researchers in computing and data science should ethically respond.

Discussion Questions

• What are some of the differences between potential harmful impacts for individuals or groups stemming from research projects in biomedicine versus data sciences?

• Current US federal regulations require oversight by an institutional review board (IRB) for any research project that “obtains, uses, studies, analyzes, or generates identifiable private information” about “living individuals.” What are some examples of research projects in computing or the data sciences in which such a regulation might apply? What are examples in which such research should qualify for an exemption?

• The “Common Rule” governing protections for human subjects in research was formulated in response to specific challenges and abuses, largely within biomedical research. Do you think the existing regulations provide appropriate or sufficient oversight for research involving human-sourced data and information today?

Bibliography

Anonymous. “Shared, But Not Up for Grabs.” Nature Machine Intelligence 1 (April 2019): 163. https://doi.org/10.1038/s42256-019-0047-y

Comfort, Nathaniel. “The Prisoner As Model Organism: Malaria Research at Stateville Penitentiary.” Studies in History and Philosophy of Science, Part C 40, no. 3 (2009): 190–203. https://doi.org/10.1016/j.shpsc.2009.06.007

Drozdowski, Pawel, Christian Rathgeb, Antitza Dantcheva, Naser Damer, and Christoph Busch. “Demographic Bias in Biometrics: A Survey on an Emerging Challenge.” IEEE Transactions on Technology and Society 1, no. 2 (June 2020): 89–103. https://doi.org/10.1109/TTS.2020.2992344

Fiesler, Casey. “Ethical Considerations for Research Involving (Speculative) Public Data.” Proceedings of the ACM on Human-Computer Interaction 3 (2019): 249. https://doi.org/10.1145/3370271

Fiesler, Casey, Nathan Beard, and Brian C. Keegan. “No Robots, Spiders, or Scrapers: Legal and Ethical Regulation of Data Collection Methods in Social Media Terms of Service.” Proceedings of the International AAAI Conference on Web and Social Media 14, no. 1 (2020): 187–96. https://ojs.aaai.org/index.php/ICWSM/article/view/7290

Gray, Mary L. “Big Data, Ethical Futures.” Anthropology News, January 13, 2017. https://doi.org/10.1111/AN.287

Greene, Jeremy A., and Scott H. Podolsky. “Reform, Regulation, and Pharmaceuticals: The Kefauver-Harris Amendments at 50.” New England Journal of Medicine 367 (October 18, 2012): 1481–83. https://doi.org/10.1056/NEJMp1210007.

Grother, Patrick, Mei Ngan, and Kayee Hanaoka. “Face Recognition Vendor Test (FRVT), Part 3: Demographic Effects.” Report NISTIR 8280. Washington, DC: National Institute of Standards and Technology, December 2019. https://nvlpubs.nist.gov/nistpubs/ir/2019/NIST.IR.8280.pdf

Heller, Jean. “AP Was There: Black Men Untreated in Tuskegee Syphilis Study.” Associated Press, May 10, 2017. https://apnews.com/article/business-science-health-race-and-ethnicity-syphilis-e9dd07eaa4e74052878a68132cd3803a

Jones, James H. Bad Blood: The Tuskegee Syphilis Experiment, rev. ed. New York: Free Press, 1993.

Katz, Jay. Experimentation with Human Beings. New York: Sage, 1972.

Ladimer, Irving. “Human Experimentation: Medicolegal Aspects.” New England Journal of Medicine 257, no. 1 (1957): 18–24. https://doi.org/10.1056/NEJM195707042570105

Langer, Elinor. “Human Experimentation: New York Verdict Affirms Patients’ Rights.” Science 151, no. 3711 (1966): 663–66. https://doi.org/10.1126/science.151.3711.663

Metcalf, Jacob, and Kate Crawford. “Where Are Human Subjects in Big Data Research? The Emerging Ethics Divide.” Big Data & Society (June 2016): 1–14. https://doi.org/10.1177/2053951716650211

Ohm, Paul. “Broken Promises of Privacy: Responding to the Surprising Failure of Anonymization.” UCLA Law Review 57 (2010): 1701–77. https://www.uclalawreview.org/broken-promises-of-privacy-responding-to-the-surprising-failure-of-anonymization-2/

Perkowitz, Sidney. “The Bias in the Machine: Facial Recognition Technology and Racial Disparities.” MIT Case Studies in Social and Ethical Responsibilities of Computing, no. 1 (Winter 2021). https://doi.org/10.21428/2c646de5.62272586

Petryna, Adriana. When Experiments Travel: Clinical Trials and the Global Search for Human Subjects. Princeton, NJ: Princeton University Press, 2009.

Reverby, Susan M. “Ethical Failures and History Lessons: The U.S. Public Health Service Research Studies in Tuskegee and Guatemala.” Public Health Reviews 34, no. 1 (2012): 13. https://doi.org/10.1007/BF03391665

Stark, Laura. Behind Closed Doors: IRBs and the Making of Ethical Research. Chicago: University of Chicago Press, 2012.

Tobbell, Dominique A. Pills, Power, and Policy: The Struggle for Drug Reform in Cold War America and Its Consequences. Berkeley: University of California Press, 2011.

US Department of Health and Human Services. The Belmont Report. 1979. https://www.hhs.gov/ohrp/regulations-and-policy/belmont-report/index.html

US Department of Homeland Security. The Menlo Report. 2012. https://www.dhs.gov/sites/default/files/publications/CSD-MenloPrinciplesCORE-20120803_1.pdf

Van Noorden, Richard. “The Ethical Questions That Haunt Facial-Recognition Research.” Nature 587 (November 19, 2020): 354–58. https://www.nature.com/articles/d41586-020-03187-3

Weindling, Paul. “The Origins of Informed Consent: The International Scientific Commission on Medical War Crimes and the Nuremberg Code.” Bulletin of the History of Medicine 75, no. 1 (2001): 37–71. https://doi.org/10.1353/bhm.2001.0049

Welsome, Eileen. The Plutonium Files: America’s Secret Medical Experiments in the Cold War. New York: Dial, 1999.
