Journal Articles, Book Chapters, & White Papers by Michelle Meyer
A genome-wide association study of educational attainment was conducted in a discovery sample of 101,069 individuals and a replication sample of 25,490. Three independent SNPs are genome-wide significant (rs9320913, rs11584700, rs4851266), and all three replicate. Estimated effect sizes are small (R² ≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈ 2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates can anchor power analyses in social-science genetics.
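To illustrate how an effect size estimate of this magnitude can anchor a power analysis, the sketch below computes the sample size needed to detect a single SNP explaining R² ≈ 0.02% of trait variance at genome-wide significance. It assumes a 1-df chi-square association test, alpha = 5e-8, and an 80% power target; these are conventional choices for illustration, not details drawn from the paper.

```python
# Minimal power-analysis sketch. Assumptions: 1-df chi-square association
# test at genome-wide significance (alpha = 5e-8) and an 80% power target;
# only the R^2 = 0.02% effect size comes from the abstract.
from scipy.stats import chi2, ncx2

def gwas_power(n, r2=0.0002, alpha=5e-8):
    """Power to detect a SNP explaining r2 of trait variance in n samples."""
    ncp = n * r2 / (1 - r2)            # noncentrality parameter
    crit = chi2.ppf(1 - alpha, df=1)   # significance threshold (~29.7)
    return ncx2.sf(crit, df=1, nc=ncp)

# Scan upward for the smallest n (to the nearest 1,000) giving 80% power.
n = 50_000
while gwas_power(n) < 0.80:
    n += 1_000
print(f"~{n:,} samples for 80% power at R^2 = 0.02%")
```

Under these assumptions the required sample lands near 200,000 individuals, which illustrates why effects this small demand very large cohorts.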
“Practitioners” — whether business managers, lawmakers, clinicians, or other actors — are constantly innovating, in the broad sense of introducing new products, services, policies, or practices. In some cases (e.g., new drugs and medical devices), we’ve decided that the risks of such innovations require that they be carefully introduced into small populations, and their safety and efficacy measured, before they’re introduced into the general population. But for the vast majority of innovations, ex ante regulation requiring evidence of safety and efficacy neither does — nor feasibly could — exist. In these cases, how should practitioners responsibly innovate?
Most commonly, innovation is ad hoc and intuition-driven. Sometimes, a practitioner will attempt to rigorously determine the effects of a novel practice by comparing it to an alternative possible innovation or to the status quo. Not infrequently, if those subject to such A/B testing (as marketers and data scientists refer to it) or experimentation (as scientists in other fields call it) are fully informed about it, the results will be badly biased. In those cases, the practitioner may undertake the exercise more or less in secret, at least initially.
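For concreteness, the sketch below shows the statistical core of such an A/B comparison: a two-proportion z-test on an engagement metric. The metric, arm labels, and counts are all hypothetical, invented purely for illustration.

```python
# Minimal A/B-test sketch: a two-proportion z-test comparing a
# hypothetical engagement rate under practice A vs. practice B.
# All counts are invented for illustration.
from statsmodels.stats.proportion import proportions_ztest

engaged  = [5_120, 4_890]      # users who engaged: arm A, arm B
assigned = [50_000, 50_000]    # users randomly assigned to each arm
z_stat, p_value = proportions_ztest(engaged, assigned)
print(f"z = {z_stat:.2f}, p = {p_value:.4f}")
```

Randomizing users between A and B, rather than rolling A out to everyone, is what allows any difference in outcomes to be attributed to the practice itself.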
Practices that are subject to A/B testing generally have a far greater chance of being discovered to be unsafe or ineffective, potentially leading to substantial welfare gains. Yet the conventional wisdom is that “human experimentation” is inherently dangerous and that human experimentation without informed consent is always unethical.
Facebook recently learned this lesson the hard way when the public discovered a 2012 experiment it had conducted to determine the effects of News Feed, an innovation the company launched in 2006 that marked a major shift in how 1.44 billion people now allocate their time and in the way they observe and interact with others. Academic studies have suggested two contradictory hypotheses about the risks of News Feed: that exposure to friends’ positive posts is psychologically risky (through a social comparison mechanism) and that exposure to negative posts is psychologically risky (through an emotional contagion mechanism). But these contradictory studies were mostly small and observational. The company alone was in a position to rigorously determine the mental health effects of its service, and to do so relatively cheaply. And for one week in January of 2012, it conducted an experiment in which it attempted to do just that.
How should we think about Facebook’s decision to conduct an experiment? Reaction was in fact swift and fierce. Criticism by both the public and some prominent ethicists centered on the fact that the 700,000 or so users involved had not consented to participate in what appeared to be a study designed to psychologically harm users by manipulating their emotions. Critics charged Facebook with exploiting its position of power over users, treating them as mere means to the corporation’s ends, and depriving them of information necessary for them to make a considered judgment about what was in their best interests. Some demanded federal and state investigations and retraction of the published results of the experiment.
But this considerable discussion paid scant attention to the experiment’s relationship to Facebook’s underlying practice of algorithmically curating users’ News Feeds, with its risks and uncertainties, which, after all, were imposed on 1.44 billion users without their knowledge or consent. In this article, using the Facebook emotional contagion experiment and, to a lesser extent, the OkCupid matching algorithm experiment as case studies, I explore two frames through which we can think about these and similar corporate field experiments. The first frame is the familiar one used by ethicists and regulators to govern human subjects research. Contrary to popular belief, this frame, articulated in the Belmont Report and codified in the federal Common Rule, appropriately permits prima facie duties to obtain subjects’ informed consent to be overridden when obtaining consent would be infeasible and risks to subjects are no more than minimal — criteria, I argue, that there are good reasons to believe applied to the Facebook experiment.
The second frame contextualizes field experiments against the backdrop of the underlying practice they’re designed to study. Foregrounding the experimenter’s role as a practitioner, it asks how she ought to responsibly innovate and about the appropriate role of experiments in that innovation process. Experiments involving a tight fit between the population upon whom (no more than minimal) risks are imposed and the population that stands to benefit from the knowledge produced by a study may not only be ethically permissible; where they are conducted by the innovator who is both the proximate cause and cheapest avoider of any innovation-related costs, these experiments may be ethically laudable or even obligatory.
Like Rubin’s vase, in which viewers vacillate between seeing a vase and two opposing faces in profile, each of these two frames becomes salient by bringing a different aspect of the overall situation to the foreground: either the experiment or the practice whose effects it tests. Yet almost everyone saw the Facebook experiment exclusively through the first frame of human subjects research, and never through the second frame of responsible innovation. Why?
Using the OkCupid experiment as a mini case study, I identify the “A/B illusion”: the widespread tendency to view a field experiment designed to study the effects of an existing or proposed practice as more morally suspicious than a practitioner’s alternative of immediately implementing an untested practice. The A/B illusion, which the Common Rule lamentably fosters, can cause us to overregulate research and underprotect a practice’s end-users. For instance, given probative but inconclusive evidence that News Feed psychologically harms users through exposure to negative and/or positive posts, Facebook’s unique position to establish the effects of its innovation, and the relative ease with which it could do so, most criticisms of the Facebook experiment reflect the A/B illusion and should be inverted: it is not the practitioner who engages in A/B testing but the practitioner who simply implements A who is more likely to exploit her position of power over users or employees, to treat them as mere means to the corporation’s ends, and to deprive them of information necessary for them to make a considered judgment about what is in their best interests.
Scholars and lawmakers expend much effort determining optimal incentives to innovate, but almost entirely neglect the regulation of knowledge-producing activities themselves. This Article critically examines that regulatory framework, adopted by more than one dozen federal agencies in the U.S. and many other countries, which governs the vast majority of those knowledge-producing activities that have the greatest potential to affect human welfare: research involving human beings, or “human subjects research” (HSR). It focuses on the primary actors in the regulation of HSR — licensing committees called Institutional Review Boards (IRBs) which, before each study may proceed, must find that its risks to participants are “reasonable in relation to” its expected benefits for both participants and society. It argues for a particular interpretation of this risk-benefit standard and, drawing on scholarship in psychology, economics, neuroscience and other fields, argues that participant heterogeneity prevents IRBs from carrying out their regulatory duty. Instead, the regulatory system implicitly responds to the heterogeneity problem with risk aversion that is costly not only to researchers and society but, critically, to would-be research participants. The Article concludes by laying out the policy options that remain in the wake of the heterogeneity problem’s intractability: continuing the legal fiction of risk-benefit analysis, honestly embracing the heterogeneity problem and its costs, or jettisoning IRB risk-benefit analysis. A companion Article develops the possibility of the third option.
On July 26, 2011, federal regulators issued an advance notice of proposed rulemaking (ANPRM) outlining proposed changes to the regulations that govern human subjects research, which have been adopted by 18 federal agencies and departments (and are better known as the Common Rule). As suggested by its subtitle — Enhancing Protections for Research Subjects and Reducing Burden, Delay, and Ambiguity for Investigators — the ANPRM attempts to appease both those who claim that human subjects research is currently overregulated and those who claim that it is underregulated. It proposes to do so by shifting scarce regulatory resources from “low risk” studies, where they unnecessarily burden research, to studies that “pose risks of serious physical or psychological harm,” which currently suffer from insufficiently rigorous review and thereby endanger participant welfare. That is, like regulators in many other sectors responding to charges of over- or under-regulation, the ANPRM’s architects seek to render research regulation “risk-based,” by making the kind and extent of IRB review and agency oversight proportionate to the riskiness of the research. In this respect, the ANPRM is just the United States’s contribution to a global trend toward risk-proportionate regulation of a wide range of activities, including food safety, medicine, work safety, environmental protection, and financial regulation.
In pressing for increased risk-based research regulation, the ANPRM cleverly exploits what is perhaps the two camps of critics’ only common ground: a shared faith in the possibility of regulators making objectively “correct” risk-benefit assessments, and a shared sense of regulators’ too-frequent failure to do so, as demonstrated in part by widespread variation in IRB decisions regarding similar and even identical protocols. (Here the two camps part ways, with one camp emphasizing Type I errors, in which “unreasonably risky” research is allowed to proceed, and the other emphasizing Type II errors, in which important but “low-risk” research is rejected, altered or delayed.) Like the opposing camps of critics it seeks to appease, risk-proportionate research regulation itself assumes that there is a meaningful way for regulators to distinguish “low-” from “high-risk” research. That is, it requires a basis on which some social planner can, in advance and with respect to all prospective participants in a particular study or category of research, deem some research-related harms insufficiently probable or significant to warrant the full panoply of protections afforded participants in other studies.
This chapter calls that assumption into question. Because prospective research participants are heterogeneous in their preferences and other circumstances, the same protocol will offer a different risk-benefit profile for different participants. (IRB variation, then, should not surprise us in the least; much of it is likely simply a reflection of the fact that all individuals, including those who serve on IRBs, vary in their preferences regarding research risks, benefits, and trade-offs between the two.) Before discussing this challenge from participant heterogeneity, I provide an overview of risk-based regulation and discuss two other notable challenges in applying such a regulatory framework to human subjects research: heterogeneity among regulated research activities and regulator biases. I conclude by suggesting an alternative way to redistribute scarce regulatory resources that embraces, rather than ignores, all three challenges. Although I focus on U.S. governance of human subjects research, the analysis is more broadly applicable in light of the fact that the U.S. has essentially exported this system to many other nations.
Essay on the ethics of disclosing highly preliminary research results regarding chronic traumatic encephalopathy (CTE) to participants.
As part of issuing its report and recommendations to President Clinton, the National Bioethics Advisory Commission heard testimony from several religious ethicists and theologians regarding different faith traditions' views of research involving human stem cells. This summary and analysis of that testimony was published as an appendix to the Commission's report.
SSRN Electronic Journal, 2000
This paper addresses the extent to which the rights of privacy and reproductive liberty protected by the United States Constitution prevent states from regulating assisted reproductive technologies (ARTs). It concludes that under the best interpretation of the Supreme Court’s existing case law, states have ample room to regulate individuals’ decisions to procreate, including decisions to use ARTs. States, pursuant to their police powers, may regulate ARTs in order to protect the health, safety, and welfare of their citizens. However, courts will strictly scrutinize any regulation of procreation that distinguishes socially disfavored groups for different treatment. Similarly, even where a regulation would apply equally to all citizens, it must serve a legitimate governmental interest, rather than merely reflect “outmoded taboos.” A companion essay, Throwing the Baby Out with the Amniotic Fluid: Not All Reproductive Choices are Morally or Legally Equivalent, is available at http://ssrn.com/abstract=2127286. Two law review articles in progress, Rights To and Not To Procreate and Towards a Jurisprudence of Procreation, develop these ideas.
In the pursuit of truly evidence-based medicine (EBM) and the "learning health care system" that the Institute of Medicine has called for, both bioethicists and federal regulators are, happily, rethinking the way that we govern both biomedical research and medical practice, as well as the sharp boundary that the field has assumed can and should exist between them.
Meanwhile, a parallel conversation is taking place among legal scholars, political scientists, lawmakers and others, who increasingly argue that all decisions affecting human welfare — and not just medical decisions — should, wherever possible, be based on sound evidence about the comparative effects of available alternatives. Participants in this second conversation, explicitly invoking EBM and the gold standard for producing the evidence base underlying it, the randomized controlled trial (RCT), argue for widespread experimentation in law and policy to effect evidence-based practice (EBP) across several "practice" domains, including legal services, education, criminal justice, housing, voting practices, welfare reform, tax law, and environmental regulation.
This Politics and Policy column, forthcoming in the Hastings Center Report, argues that these conversations should not be separate. The problem with the EBP conversation proceeding on its own is that most of its participants are unaware (perhaps blissfully so) of the elaborate regulatory apparatus that governs all manner of knowledge production; they assume that informed consent is the only potential ethical-legal obstacle to EBP. And the problem with the EBM conversation proceeding on its own is that doing so threatens a repeat of the late 1970s, when regulations and ethical norms that came to govern knowledge production involving all disciplines and methodologies were developed by a relatively insular group within biomedicine.
The column ends by sketching some of the questions that diverse decision-makers will have to confront as they move towards a world in which research is integrated into various practice areas.
Legal ethicists have long debated the circumstances, if any, under which a lawyer may ethically deviate from the traditional model of the client-attorney relationship by failing to zealously advocate for her client’s legally permissible goals. One camp (“the traditionalists”) has tended to insist that the lawyer, like the doctor, has an unwavering duty to single-mindedly dedicate herself to her client’s ends, lest she become a "double agent," while a second camp (“the critics”) has tended to argue that the primary aim of the lawyer should be justice.
This Note, which was cited as recommended reading in The Green Bag's Almanac of Exemplary Legal Writing of 2006, stakes out a middle ground in this debate about professional accountability and role morality by analogizing cause lawyers to physician-researchers.
I argue, against the traditionalists, that a lawyer need not single-mindedly pursue her client’s interests at the expense of her own values. Traditionalists often draw on analogies to the patient-doctor relationship to support the zealous advocacy model of lawyering. In fact, however, the human subject research that has been the mainstay of medical progress is a compromise between society’s need for that progress and a recognition of the physician as person, on the one hand, and the traditional understanding of the physician as fiduciary to (and, when therapy is scarce, zealous advocate for) her patients, on the other. The existence of a broad consensus both within the medical profession and in society at large that experimental research to which the patient-subject consents is ethical suggests that cause lawyering’s similar reprioritization of client and cause may also be ethical under similar circumstances.
Contrary to many critics, however, I argue that the client-attorney relationship must remain essentially client-centered. A lawyer who departs from the traditional model may not conscript her client into service to her cause without the client’s voluntary, informed consent — regardless of how just the cause is or how strongly the client does (or, in the lawyer’s view, should) identify with it. The attorney owes her client the opportunity to make a voluntary, informed decision about whether the client believes that the potential benefits of participating in the lawyer’s legal activism outweigh the risks.
In short, I argue that the real ethical problem of cause lawyering (and other nontraditional approaches) is not that of the double agent, but that of the secret agent — the lawyer who conceals from her client (whether consciously or not) her readiness to place her cause (or the common good, or the interests of other clients) above her client.
This short article examines the merits of one of the first decisions made under the NIH's new Guidelines governing whether existing stem cell lines are eligible for federal funding (the origins of which I discuss here: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2103417).
Individuals undergoing in vitro fertilization (IVF) for reproductive purposes who find themselves with "spare" embryos after they have completed their families are often given the opportunity to donate those embryos rather than destroying them or donating them to another couple. Under the Guidelines, research using existing stem cell lines derived from such embryos is eligible for federal funding if the IVF patients gave their voluntary, informed consent to the donation. The NIH rejected several important, disease-specific cell lines because donors were required to waive any legal claims arising from the research, finding that the “use of exculpatory language . . . was inconsistent with the basic ethical principle of voluntary consent.”
I consider the possibility that the waiver undermined the voluntariness of the donations by being coercive. Assuming, arguendo, that this is an apt characterization of the waiver, I argue that the appropriate response would have been to sever the waiver term from the rest of the agreement rather than to invalidate the agreement as a whole, which impedes both important science and the donors' intent.
I then defend a more provocative claim: namely, that the waiver was an ethically acceptable part of a take-it-or-leave-it offer which did not exploit the donors.
Bioethics increasingly relies on empirical research in order to resolve ethical issues. Bioethics has also, of late, begun reimagining the researcher-subject relationship to include new positive duties to subjects, such as providing ancillary care and offering to return research results. These trends have converged in the form of a growing stream of empirical studies of subjects’ preferences about being offered research results. The legal and ethical question at the intersection of these trends is how we should factor these preferences into research design.
This short article takes a middle position on this question in the context of a proposed longitudinal study of genes, environment and health, in which the National Human Genome Research Institute hopes to enroll a cohort of 500,000, primarily through door-to-door recruitment. A survey found that most respondents would want to be offered the return of their individual genetic results. I reject both (1) the claim that evidence that subjects want to be offered results entails a duty by researchers to do so and (2) the paternalistic position that most individual results should not be returned.
I reach the first conclusion by arguing that the subject-researcher relationship in this case should be viewed as a sort of arm's-length donative contract: an agreement under which the donor-subject agrees to make a gift to the donee-researcher, who arguably holds the gift in trust for society (especially given that the research is government-funded) and does not promise anything directly in return. A contract’s terms need not, and rarely do, reflect the ideal preferences of either party, and researchers may make offers to potential research volunteers that reflect their own preferences, resource limitations, and conscientious beliefs about the wisdom of returning certain results. The acceptance of what often must be, for pragmatic reasons, a take-it-or-leave-it offer to volunteer is not invalidated, nor the ensuing research rendered unethical, because the volunteer’s gift is not “repaid” in the form of research results or, indeed, in any form (though subjects must be told results will not be disclosed). Moreover, returning individual results in an ethical manner can be costly, and it would be odd (and perhaps not in keeping with subjects’ overall preferences, which most surveys are not designed to elicit or construct) if researchers had a duty to subjects that undercut their ability to devote the maximum available resources to the research itself — the very end that should motivate the subjects to volunteer in the first place. Finally, even if we assume that subjects own or are otherwise entitled to their genetic information, surely such property can be alienated, or such entitlements waived, through the donative contract. A contract’s terms, however, must not sink to the level of unconscionability.
Empirical data can, however, inform researchers who admirably want, and are able, to honor subjects’ preferences. Subjects have interests in receiving a much wider range of results than is often acknowledged, and most of the psychosocial risks they are said to face from disclosure remain speculative. Returning results may also help researchers recruit and retain subjects.
This concluding chapter, co-authored with David Lazer in a volume (edited by Lazer) addressing the use of DNA evidence in the pretrial and posttrial phases of the criminal justice system, identifies and analyzes areas where consensus does — or should — exist, as well as areas of disagreement.
Two primary areas of consensus exist. First, because DNA changes the meaning of time in the criminal justice system, the system's notion of finality must be rethought: evidence must be preserved, statutory criteria must be developed to govern postconviction access to and review of DNA evidence, and statutes of limitation on the introduction of new evidence should be lengthened or abolished. Second, DNA offender databases can be effective and legitimate tools for solving and preventing crimes.
In the area of posttrial relief, the areas of disagreement which we discuss include the appropriate gatekeepers of and criteria for postconviction review, the appropriate response to exculpatory and guilt-confirming results, and how to address the systemic flaws in the criminal justice system which DNA testing has unearthed. In the area of offender databases, we discuss debates about the appropriate criteria for direct inclusion in the database, the appropriateness of indirect inclusion of offenders' relatives through low-stringency searches by law enforcement, and how the database should be regulated.
In this short article, which is part of a broader project of rethinking the way we govern human subjects research, I consider a proposal to improve the quantity, quality and speed of research by requiring subjects to contractually agree not to withdraw from research, at least without good reason. That proposal would require altering the current regulations governing human subjects research, which provide subjects with an inalienable right to withdraw from the research at any time, for any reason, without penalty or loss of benefit to which they would otherwise be entitled.
I argue that the gains sought by this particular proposal would likely be swamped by the costs of enforcing such contracts and by the chilling effect they would have on subject recruitment. But I agree that we should explore the role of contract in rethinking the way we govern human subjects research. Doing that in any significant way, however, requires an understanding of the subject-researcher relationship that is not fiduciary. In this article, I defend the claim that this relationship is in fact not fiduciary, cannot coherently be made fiduciary, and ought not to be fiduciary. I then defend the role of contract not only in the efficient conduct of research, but also, and more importantly, in serving the interests and welfare of research subjects.
This is a brief comment on the Iceland Supreme Court case of Guðmundsdóttir v. Iceland. In that case, the daughter of a deceased man whose genetic, ancestral, and health information was scheduled to be included in a national database successfully argued that including her father's information infringed her own privacy, since information about her could be inferred from "his" genetic information. I suggest that in the U.S., at least, it may make more sense to think about inherently familial genetic information through the lens of property law, with its ability to recognize and balance competing property interests in the same object, rather than privacy law, which tends to be fairly individualistic.
During the George W. Bush Administration, federally fundable human embryonic stem cell (hESC) research was limited to certain kinds of research on, at most, 21 existing stem cell lines of dubious quality. Many assumed that the Obama administration would usher in a sea change by expanding NIH support for hESC research and reducing the patchwork of state and federal regulations that governed it under the prior administration. As expected, President Obama signed an executive order revoking the Bush policy, and NIH issued implementing Guidelines for Human Stem Cell Research. In this article, we analyze the extent to which the Guidelines are likely to achieve two of their stated goals: “expand[ing] NIH support” for stem cell research and “ameliorat[ing]” the “patchwork” of standards that now govern it.
With respect to the goal of expanding federal hESC research, the Guidelines effect only incremental change in the scope of eligible research, preserving all of the Bush restrictions except the prohibition on funding research on new lines. Although that amendment expands — in theory, infinitely — the number of newly eligible lines, much research is based on existing lines, and the number of those that will become eligible depends on how strictly NIH applies its detailed informed consent requirements. Finally, expansions in the fundability of hESC research are meaningful largely to the extent that such research is actually funded, and we predict that, compared to other funders, NIH funding will increase only modestly.
With respect to the goal of ameliorating the “patchwork” of standards governing U.S. stem cell research, although the Guidelines centralize crucial aspects of federal policy, and may exert influence even over non-NIH-supported researchers and other research funders and regulators, they almost certainly will not substantially reduce the multiple standards for conducting hESC research that exist in the U.S., much less in the world. Multiple funders and regulators, and so multiple sets of rules, will continue to exist, with none clearly dominant.
Our best guess for the short-term future of U.S. stem cell policy in the aftermath of the Guidelines, then, is that — for better or worse — it will look very much like the recent past.
Popular Press Essays, Articles, & Op-Eds by Michelle Meyer
This brief Observation analyzes the multiyear, multiround litigation of Sherley v. Sebelius, in which plaintiffs, adult stem cell researchers, challenged the NIH policy of funding human embryonic stem cell research as in violation of the so-called Dickey-Wicker Amendment, an appropriations rider that prohibits HHS from funding "research in which" an embryo is destroyed.
New York Times "Gray Matter" essay on the importance of data-driven innovation in business.
A response at Forbes.com to Rebecca Skloot's NYT op-ed on proposed changes in the governance of secondary research using biospecimens under the federal Common Rule.
Uploads
Journal Articles, Book Chapters, & White Papers by Michelle Meyer
and all three replicate. Estimated effects sizes are small (R2≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈ 2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates
can anchor power analyses in social-science genetics.
Most commonly, innovation is ad hoc and intuition-driven. Sometimes, a practitioner will attempt to rigorously determine the effects of a novel practice by comparing it to an alternative possible innovation or to the status quo. Not infrequently, if those subject to such A/B testing (as marketers and data scientists refer to it) or experimentation (as scientists in other fields call it) are fully informed about it, the results will be badly biased. In those cases, the practitioner may undertake the exercise more or less in secret, at least initially.
Practices that are subject to A/B testing generally have a far greater chance of being discovered to be unsafe or ineffective, potentially leading to substantial welfare gains. Yet the conventional wisdom is that “human experimentation” is inherently dangerous and human experimentation without informed consent is always unethical.
Facebook recently learned this lesson the hard way after the public learned about a 2012 experiment it conducted to determine the effects of News Feed, an innovation the company had launched in 2006 that marked a major shift in how now 1.44 billion people allocate their time and in the way they observe and interact with others. Academic studies have suggested two contradictory hypotheses about the risks of News Feed: that exposure to friends’ positive posts is psychologically risky (through a social comparison mechanism) and that exposure to negative posts is psychologically risky (through an emotional contagion mechanism). But these contradictory studies were mostly small and observational. The company alone was in a position to rigorously determine the mental health effects of its service, and to do so relatively cheaply. And for one week in January of 2012, it conducted an experiment in which it attempted to do just that.
How should we think about Facebook’s decision to conduct an experiment? Reaction was in fact swift and fierce. Criticism by both the public and some prominent ethicists centered on the fact that the 700,000 or so users involved had not consented to participate in what appeared to be a study designed to psychologically harm users by manipulating their emotions. Critics charged Facebook with exploiting its position of power over users, treating them as mere means to the corporation’s ends, and depriving them of information necessary for them to make a considered judgment about what was in their best interests. Some demanded federal and state investigations and retraction of the published results of the experiment.
But this considerable discussion paid scant attention to the experiment’s relationship to Facebook’s underlying practice of algorithmically curating users’ News Feeds and its risks and uncertainties which, after all, were imposed on 1.44 billion users without their knowledge or consent. In this article, using the Facebook emotional contagion experiment and, to a lesser extent, the OkCupid matching algorithm experiment, as case studies, I explore two frames through which we can think about these and similar corporate field experiments. The first frame is the familiar one used by ethicists and regulators to govern human subjects research. Contrary to popular belief, this frame, articulated in the Belmont Report and codified in the federal Common Rule, appropriately permits prima facie duties to obtain subjects’ informed consent to be overridden when obtaining consent would be infeasible and risks to subjects are no more than minimal — criteria, I argue, that there are good reasons to believe applied to the Facebook experiment.
The second frame contextualizes field experiments against the backdrop of the underlying practice they’re designed to study. Foregrounding the experimenter’s role as a practitioner, it asks how she ought to responsibly innovate and about the appropriate role of experiments in that innovation process. Experiments involving a tight fit between the population upon whom (no more than minimal) risks are imposed and the population that stands to benefit from the knowledge produced by a study may not only be ethically permissible; where they are conducted by the innovator who is both the proximate cause and cheapest avoider of any innovation-related costs, these experiments may be ethically laudable or even obligatory.
Like Rubin’s vase, where viewers vacillate between seeing a vase or two opposing faces in profile, each of these two frames becomes salient by bringing some aspect of the overall situation to the foreground: either the experiment or the practice whose effects it tests. But almost everyone saw the Facebook experiment through the first framework of human subjects research every time, and never through the second framework of responsible innovation. Why?
Using the OkCupid experiment as a mini case study, I dub the “A/B illusion” the widespread tendency to view a field experiment designed to study the effects of an existing or proposed practice as more morally suspicious than a practitioner’s alternative of immediately implementing an untested practice. The A/B illusion, which the Common Rule lamentably fosters, can cause us to overregulate research and underprotect a practice’s end-users. For instance, given probative but inconclusive evidence that News Feed psychologically harms users through exposure to negative and/or positive posts, Facebook’s unique position to establish the effects of its innovation, and the relative ease with which it could do so, most criticisms of the Facebook experiment reflect the A/B illusion and should be inverted: It is not the practitioner who engages in A/B testing but the practitioner who simply implements A who is more likely to exploit her position of power over users or employees, to treat them as mere means to the corporation’s ends, and to deprive them of information necessary for them to make a considered judgment about what is in their best interests.
In pressing for increased risk-based research regulation, the ANPRM cleverly exploits what is perhaps two camps of critics’ only common ground: a shared faith in the possibility of regulators making objectively “correct” risk-benefit assessments, and their too-frequent failure to do the same, as demonstrated in part by widespread variation in IRB decisions regarding similar and even identical protocols. (Here the two camps part ways, with one camp emphasizing Type I errors, in which “unreasonably risky” research is allowed to proceed, and the other emphasizing Type II errors, in which important but “low-risk” research is rejected, altered or delayed.) Like the opposing camps of critics it seeks to appease, risk-proportionate research regulation itself assumes a meaningful way for regulators to distinguish “low-” from “high-risk” research. That is, it requires a basis on which some social planner can, in advance and with respect to all prospective participants in a particular study or category of research, deem some research-related harms insufficiently probable or significant to warrant the full panoply of protections afforded participants in other studies.
This chapter calls that assumption into question. Because prospective research participants are heterogeneous in their preferences and other circumstances, the same protocol will offer a different risk-benefit profile for different participants. (IRB variation, then, should not surprise us in the least; much of it is likely simply a reflection of the fact that all individuals, including those who serve on IRBs, vary in their preferences regarding research risks, benefits, and trade-offs between the two.) Before discussing this challenge from participant heterogeneity, I provide an overview of risk-based regulation and discusses two other notable challenges in applying such a regulatory framework to human subjects research: heterogeneity among regulated research activities and regulator biases. I conclude by suggesting an alternative way to redistribute scarce regulatory resources that embraces, rather than ignores, all three challenges. Although I focus on U.S. governance of human subjects research, the analysis is more broadly applicable in light of the fact that the U.S. has essentially exported this system to many other nations.
Meanwhile, a parallel conversation is taking place among legal scholars, political scientists, lawmakers and others, who increasingly argue that all decisions affecting human welfare — and not just medical decisions — should, wherever possible, be based on sound evidence about the comparative effects of available alternatives. Participants in this second conversation, explicitly invoking EMB and the gold standard for producing the evidence base underlying it, the randomized controlled trial (RCT), argue for widespread experimentation in law and policy to effect evidence-based practice (EBP) across several "practice" domains, including legal services, education, criminal justice, housing, voting practices, welfare reform, tax law, and environmental regulation.
This Politics and Policy column, forthcoming in the Hastings Center Report, argues that these conversations should not be separate. The problem with the EBP conversation proceeding on its own is that most of its participants are unaware (perhaps blissfully so) of the elaborate regulatory apparatus that govern all manner of knowledge production; they assume that informed consent is the only potential ethical-legal obstacle to EBP. And the problem with the EBM conversation proceeding on its own is that doing so threatens a repeat of the late 1970s, when regulations and ethical norms that came to govern knowledge production involving all disciplines and methodologies were developed by a relatively insular group within biomedicine.
The column ends by sketching some of the questions that diverse decision-makers will have to confront as they move towards a world in which research is integrated into various practice areas.
This Note, which was cited as recommended reading in The Green Bag's Almanac of Exemplary Legal Writing of 2006, stakes out a middle ground in this debate about professional accountability and role morality by analogizing cause lawyers to physician-researchers.
I argue, against the traditionalists, that a lawyer need not single-mindedly pursue her client’s interests at the expense of her own values. Traditionalists often draw on analogies to the patient-doctor relationship to support the zealous advocacy model of lawyering. In fact, however, the human subject research that has been the mainstay of medical progress is a compromise between society’s need for that progress and a recognition of the physician as person, on the one hand, and the traditional understanding of the physician as fiduciary to (and, when therapy is scarce, zealous advocate for) her patients, on the other. The existence of a broad consensus both within the medical profession and in society at large that experimental research to which the patient-subject consents is ethical suggests that cause lawyering’s similar reprioritization of client and cause may also be ethical under similar circumstances.
Contrary to many critics, however, I argue that the client-attorney relationship must remain essentially client-centered. A lawyer who departs from the traditional model may not conscript her client into service to her cause without the client’s voluntary, informed consent — regardless of how just the cause is or how strongly the client does (or, in the lawyer’s view, should) identify with it. The attorney owes her client the opportunity to make a voluntary, informed decision about whether the client believes that the potential benefits of participating in the lawyer’s legal activism outweighs the risks.
In short, I argue that the real ethical problem of cause lawyering (and other nontraditional approaches) is not that of the double agent, but that of the secret agent — the lawyer who conceals from her client (whether consciously or not) her readiness to place her cause (or the common good, or the interests of other clients) above her client.
Individuals undergoing in vitro fertilization (IVF) for reproductive purposes who find themselves with "spare" embryos after they have completed their families are often given the opportunity to donate those embryos rather than destroying them or donating them to another couple. Under the Guidelines, research using existing stem cell lines derived from such embryos is eligible for federal funding if the IVF patients gave their voluntary, informed consent to the donation. The NIH rejected several important, disease-specific cell lines because donors were required to waive any legal claims arising from the research, finding that the “use of exculpatory language . . . was inconsistent with the basic ethical principle of voluntary consent."
I consider the possibility that the waiver undermined the voluntariness of the donations by being coercive. Assuming, arguendo, that this is an apt characterization of the waiver, I argue that the appropriate response would have been to sever the waiver term from the rest of the agreement, not to invalidate the agreement as a whole, thus impeding both important science and the donors' intent.
I then defend a more provocative claim: namely, that the waiver was an ethically acceptable part of a take-it-or-leave-it offer which did not exploit the donors.
This short article takes a middle position on this question in the context of a proposed longitudinal study of genes, environment and health, in which the National Human Genome Research Institute hopes to enroll a cohort of 500,000, primarily through door-to-door recruitment. A survey found that most respondents would want to be offered the return of their individual genetic results. I reject both (1) the claim that evidence that subjects want to be offered results entails a duty by researchers to do so, but also (2) the paternalistic position that most individual results should not be returned.
I reach the first conclusion by arguing that the subject-researcher relationship in this case should be viewed as a sort of arms-length donative contract: an agreement under which the donor-subject agrees to make a gift to the donee-researcher, who arguably holds the gift in trust for society (especially given that the research is government-funded) and does not promise anything directly in return. A contract’s terms need not, and rarely do, reflect the ideal preferences of either party, and researchers may make offers to potential research volunteers that reflect their own preferences, resource limitations, and conscientious beliefs about the wisdom of returning certain results. The acceptance of what often must be, for pragmatic reasons, a take-it-or-leave-it offer to volunteer is not invalidated, nor the ensuing research rendered unethical, because the volunteer’s gift is not “repaid” in the form of research results or, indeed, in any form (though subjects must be told results will not be disclosed). Moreover, returning individual results in an ethical manner can be costly, and it would be odd (and perhaps not in keeping with subjects’ overall preferences, which most surveys are not designed to elicit or construct) if researchers had a duty to subjects that undercut their ability to devote the maximum available resources to the research itself — the very end that should motivate the subjects to volunteer in the first place. Finally, even if we assume that subjects own or are otherwise entitled to their genetic information, surely such property can be alienated, or such entitlements waived, through the donative contract. A contract’s terms, however, must not sink to the level of unconscionability.
Empirical data can, however, inform researchers who admirably want, and are able, to honor subjects’ preferences. Subjects have interests in receiving a much wider range of results than is often acknowledged, and most of the psychosocial risks they are said to face from disclosure remain speculative. Returning results may also help researchers recruit and retain subjects.
Two primary areas of consensus exist. First, because DNA changes the meaning of time in the criminal justice system, the system's notion of finality must be rethought: evidence must be preserved, statutory criteria must be developed to govern postconviction access to and review of DNA evidence, and statutes of limitation on the introduction of new evidence should be lengthened or abolished. Second, DNA offender databases can be effective and legitimate tools for solving and preventing crimes.
In the area of posttrial relief, the areas of disagreement which we discuss include the appropriate gatekeepers of and criteria for postconviction review, the appropriate response to exculpatory and guilt-confirming results, and how to address the systemic flaws in the criminal justice system which DNA testing has unearthed. In the area of offender databases, we discuss debates about the appropriate criteria for direct inclusion in the database, the appropriateness of indirect inclusion of offenders' relatives through low-stringency searches by law enforcement, and how the database should be regulated.
I argue that the gains sought by this particular proposal would likely be swamped by the costs of enforcing such contracts and by the chilling effect they would have on subject recruitment. But I agree that we should explore the role of contract in rethinking the way we govern human subjects research. Doing that in any significant way, however, requires an understanding of the subject-researcher relationship that is not fiduciary. In this article, I defend the claim that this relationship is in fact not fiduciary, cannot coherently be made fiduciary, and ought not to be fiduciary. I then defend the role of contract not only in the efficient conduct of research, but also, and more importantly, in serving the interests and welfare of research subjects.
With respect to the goal of expanding federal hESC research, the Guidelines effect only incremental change in the scope of eligible research, preserving all of the Bush restrictions except the prohibition on funding research on new lines. Although that amendment expands — in theory, infinitely — the number of new eligible lines, much research is based on existing lines, and the number of those that will become eligible depends on how strictly NIH applies its detailed informed consent requirements. Finally, the significance of expansions in the fundability of hESC research is meaningful largely to the extent that such research is funded, and we predict that, compared to other funders, NIH’s funding will only modestly increase.
With respect to the goal of ameliorating the “patchwork” of standards governing U.S. stem cell research, although the Guidelines centralize crucial aspects of federal policy, and may exert influence even over non-NIH-supported researchers and other research funders and regulators, they almost certainly will not substantially reduce the multiple standards for conducting hESC research that exist in the U.S., much less in the world. Multiple funders and regulators, and so multiple sets of rules, will continue to exist, with none clearly dominant.
Our best guess for the short-term future of U.S. stem cell policy in the aftermath of the Guidelines, then, is that — for better or worse — it will look very much like the recent past.
Popular Press Essays, Articles, & Op-Eds by Michelle Meyer
and all three replicate. Estimated effects sizes are small (R2≈ 0.02%), approximately 1 month of schooling per allele. A linear polygenic score from all measured SNPs accounts for ≈ 2% of the variance in both educational attainment and cognitive function. Genes in the region of the loci have previously been associated with health, cognitive, and central nervous system phenotypes, and bioinformatics analyses suggest the involvement of the anterior caudate nucleus. These findings provide promising candidate SNPs for follow-up work, and our effect size estimates
can anchor power analyses in social-science genetics.
Most commonly, innovation is ad hoc and intuition-driven. Sometimes, a practitioner will attempt to rigorously determine the effects of a novel practice by comparing it to an alternative possible innovation or to the status quo. Not infrequently, if those subject to such A/B testing (as marketers and data scientists refer to it) or experimentation (as scientists in other fields call it) are fully informed about it, the results will be badly biased. In those cases, the practitioner may undertake the exercise more or less in secret, at least initially.
Practices that are subject to A/B testing generally have a far greater chance of being discovered to be unsafe or ineffective, potentially leading to substantial welfare gains. Yet the conventional wisdom is that “human experimentation” is inherently dangerous and human experimentation without informed consent is always unethical.
Facebook recently learned this lesson the hard way after the public learned about a 2012 experiment it conducted to determine the effects of News Feed, an innovation the company had launched in 2006 that marked a major shift in how now 1.44 billion people allocate their time and in the way they observe and interact with others. Academic studies have suggested two contradictory hypotheses about the risks of News Feed: that exposure to friends’ positive posts is psychologically risky (through a social comparison mechanism) and that exposure to negative posts is psychologically risky (through an emotional contagion mechanism). But these contradictory studies were mostly small and observational. The company alone was in a position to rigorously determine the mental health effects of its service, and to do so relatively cheaply. And for one week in January of 2012, it conducted an experiment in which it attempted to do just that.
How should we think about Facebook’s decision to conduct an experiment? Reaction was in fact swift and fierce. Criticism by both the public and some prominent ethicists centered on the fact that the 700,000 or so users involved had not consented to participate in what appeared to be a study designed to psychologically harm users by manipulating their emotions. Critics charged Facebook with exploiting its position of power over users, treating them as mere means to the corporation’s ends, and depriving them of information necessary for them to make a considered judgment about what was in their best interests. Some demanded federal and state investigations and retraction of the published results of the experiment.
But this considerable discussion paid scant attention to the experiment’s relationship to Facebook’s underlying practice of algorithmically curating users’ News Feeds and its risks and uncertainties which, after all, were imposed on 1.44 billion users without their knowledge or consent. In this article, using the Facebook emotional contagion experiment and, to a lesser extent, the OkCupid matching algorithm experiment, as case studies, I explore two frames through which we can think about these and similar corporate field experiments. The first frame is the familiar one used by ethicists and regulators to govern human subjects research. Contrary to popular belief, this frame, articulated in the Belmont Report and codified in the federal Common Rule, appropriately permits prima facie duties to obtain subjects’ informed consent to be overridden when obtaining consent would be infeasible and risks to subjects are no more than minimal — criteria, I argue, that there are good reasons to believe applied to the Facebook experiment.
The second frame contextualizes field experiments against the backdrop of the underlying practice they’re designed to study. Foregrounding the experimenter’s role as a practitioner, it asks how she ought to responsibly innovate and what role experiments should play in that innovation process. Experiments involving a tight fit between the population upon whom (no more than minimal) risks are imposed and the population that stands to benefit from the knowledge a study produces may be not merely ethically permissible; where they are conducted by the innovator, who is both the proximate cause and the cheapest avoider of any innovation-related costs, such experiments may be ethically laudable or even obligatory.
Like Rubin’s vase, in which viewers vacillate between seeing a vase and two opposing faces in profile, each of these two frames becomes salient by bringing some aspect of the overall situation to the foreground: either the experiment or the practice whose effects it tests. But almost everyone saw the Facebook experiment through the first frame of human subjects research, and almost no one saw it through the second frame of responsible innovation. Why?
Using the OkCupid experiment as a mini case study, I identify what I dub the “A/B illusion”: the widespread tendency to view a field experiment designed to study the effects of an existing or proposed practice as more morally suspicious than a practitioner’s alternative of immediately implementing an untested practice. The A/B illusion, which the Common Rule lamentably fosters, can cause us to overregulate research and underprotect a practice’s end users. For instance, given probative but inconclusive evidence that News Feed psychologically harms users through exposure to negative and/or positive posts, Facebook’s unique position to establish the effects of its innovation, and the relative ease with which it could do so, most criticisms of the Facebook experiment reflect the A/B illusion and should be inverted: it is not the practitioner who engages in A/B testing but the practitioner who simply implements A who is more likely to exploit her position of power over users or employees, to treat them as mere means to the corporation’s ends, and to deprive them of information necessary for them to make a considered judgment about what is in their best interests.
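To make the welfare intuition behind this inversion concrete, the following is a minimal simulation sketch in Python. It is not drawn from the article itself: the population size, test-arm size, prior probability of harm, effect sizes, and noise level are all hypothetical parameters chosen purely for illustration. The sketch compares a practitioner who ships an untested practice A to everyone against one who first runs a small A/B test of A versus the status quo B and then ships whichever arm measured better.

```python
import random

# Hypothetical illustration (not from the article): compare two rollout
# strategies for a new practice "A" whose true effect on users is unknown.
# All numeric values below are arbitrary assumptions for illustration only.

POPULATION = 1_000_000   # users ultimately exposed to whatever is shipped
TEST_SIZE = 10_000       # users per arm in the A/B test
P_HARMFUL = 0.5          # assumed prior chance that A is harmful
HARM_A = -0.1            # per-user welfare effect of A if it is harmful
GAIN_A = 0.1             # per-user welfare effect of A if it is beneficial
NOISE = 1.0              # std. dev. of each user's noisy measured outcome

def outcome(effect):
    """One user's noisy measured welfare under a given true effect."""
    return random.gauss(effect, NOISE)

def trial():
    # Nature decides whether A actually helps or harms.
    effect_a = HARM_A if random.random() < P_HARMFUL else GAIN_A
    effect_b = 0.0  # status quo

    # Strategy 1: ship A to everyone, untested.
    ship_untested = POPULATION * effect_a

    # Strategy 2: A/B test on small samples, then ship the better arm.
    mean_a = sum(outcome(effect_a) for _ in range(TEST_SIZE)) / TEST_SIZE
    mean_b = sum(outcome(effect_b) for _ in range(TEST_SIZE)) / TEST_SIZE
    chosen = effect_a if mean_a > mean_b else effect_b
    ship_tested = (TEST_SIZE * (effect_a + effect_b)  # test-phase exposure
                   + (POPULATION - 2 * TEST_SIZE) * chosen)
    return ship_untested, ship_tested

runs = [trial() for _ in range(200)]
print("mean welfare, untested rollout:   ", sum(r[0] for r in runs) / len(runs))
print("mean welfare, A/B-tested rollout: ", sum(r[1] for r in runs) / len(runs))
```

Under these assumed numbers, the tested rollout dominates in expectation: the experiment caps exposure to a harmful version at the size of one small test arm, while the untested rollout exposes the entire population to it half the time. This is the welfare argument the abstract makes informally.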
In pressing for increased risk-based research regulation, the ANPRM cleverly exploits what is perhaps the only common ground between two camps of critics: a shared faith in the possibility of regulators making objectively “correct” risk-benefit assessments, and a shared recognition of regulators’ too-frequent failure to do so, as demonstrated in part by widespread variation in IRB decisions regarding similar and even identical protocols. (Here the two camps part ways, with one camp emphasizing Type I errors, in which “unreasonably risky” research is allowed to proceed, and the other emphasizing Type II errors, in which important but “low-risk” research is rejected, altered, or delayed.) Like the opposing camps of critics it seeks to appease, risk-proportionate research regulation itself assumes that there is a meaningful way for regulators to distinguish “low-” from “high-risk” research. That is, it requires a basis on which some social planner can, in advance and with respect to all prospective participants in a particular study or category of research, deem some research-related harms insufficiently probable or significant to warrant the full panoply of protections afforded participants in other studies.
This chapter calls that assumption into question. Because prospective research participants are heterogeneous in their preferences and other circumstances, the same protocol will offer a different risk-benefit profile for different participants. (IRB variation, then, should not surprise us in the least; much of it is likely simply a reflection of the fact that all individuals, including those who serve on IRBs, vary in their preferences regarding research risks, benefits, and trade-offs between the two.) Before discussing this challenge from participant heterogeneity, I provide an overview of risk-based regulation and discuss two other notable challenges in applying such a regulatory framework to human subjects research: heterogeneity among regulated research activities and regulator biases. I conclude by suggesting an alternative way to redistribute scarce regulatory resources that embraces, rather than ignores, all three challenges. Although I focus on U.S. governance of human subjects research, the analysis is more broadly applicable in light of the fact that the U.S. has essentially exported this system to many other nations.
Meanwhile, a parallel conversation is taking place among legal scholars, political scientists, lawmakers, and others, who increasingly argue that all decisions affecting human welfare — and not just medical decisions — should, wherever possible, be based on sound evidence about the comparative effects of available alternatives. Participants in this second conversation, explicitly invoking EBM and the gold standard for producing the evidence base underlying it, the randomized controlled trial (RCT), argue for widespread experimentation in law and policy to effect evidence-based practice (EBP) across several “practice” domains, including legal services, education, criminal justice, housing, voting practices, welfare reform, tax law, and environmental regulation.
This Politics and Policy column, forthcoming in the Hastings Center Report, argues that these conversations should not be separate. The problem with the EBP conversation proceeding on its own is that most of its participants are unaware (perhaps blissfully so) of the elaborate regulatory apparatus that governs all manner of knowledge production; they assume that informed consent is the only potential ethical-legal obstacle to EBP. And the problem with the EBM conversation proceeding on its own is that it threatens a repeat of the late 1970s, when the regulations and ethical norms that came to govern knowledge production across all disciplines and methodologies were developed by a relatively insular group within biomedicine.
The column ends by sketching some of the questions that diverse decision-makers will have to confront as they move towards a world in which research is integrated into various practice areas.
This Note, which was cited as recommended reading in The Green Bag’s Almanac of Exemplary Legal Writing of 2006, stakes out a middle ground in the debate over professional accountability and role morality by analogizing cause lawyers to physician-researchers.
I argue, against the traditionalists, that a lawyer need not single-mindedly pursue her client’s interests at the expense of her own values. Traditionalists often draw on analogies to the patient-doctor relationship to support the zealous advocacy model of lawyering. In fact, however, the human subject research that has been the mainstay of medical progress is a compromise between society’s need for that progress and a recognition of the physician as person, on the one hand, and the traditional understanding of the physician as fiduciary to (and, when therapy is scarce, zealous advocate for) her patients, on the other. The existence of a broad consensus both within the medical profession and in society at large that experimental research to which the patient-subject consents is ethical suggests that cause lawyering’s similar reprioritization of client and cause may also be ethical under similar circumstances.
Contrary to many critics, however, I argue that the client-attorney relationship must remain essentially client-centered. A lawyer who departs from the traditional model may not conscript her client into service to her cause without the client’s voluntary, informed consent — regardless of how just the cause is or how strongly the client does (or, in the lawyer’s view, should) identify with it. The attorney owes her client the opportunity to make a voluntary, informed decision about whether the potential benefits of participating in the lawyer’s legal activism outweigh the risks.
In short, I argue that the real ethical problem of cause lawyering (and other nontraditional approaches) is not that of the double agent, but that of the secret agent — the lawyer who conceals from her client (whether consciously or not) her readiness to place her cause (or the common good, or the interests of other clients) above her client.
Individuals undergoing in vitro fertilization (IVF) for reproductive purposes who find themselves with “spare” embryos after they have completed their families are often given the opportunity to donate those embryos to research rather than destroying them or donating them to another couple. Under the Guidelines, research using existing stem cell lines derived from such embryos is eligible for federal funding if the IVF patients gave their voluntary, informed consent to the donation. The NIH rejected several important, disease-specific cell lines because donors were required to waive any legal claims arising from the research, finding that the “use of exculpatory language . . . was inconsistent with the basic ethical principle of voluntary consent.”
I consider the possibility that the waiver undermined the voluntariness of the donations by being coercive. Assuming, arguendo, that this is an apt characterization of the waiver, I argue that the appropriate response would have been to sever the waiver term from the rest of the agreement rather than to invalidate the agreement as a whole, a response that impedes both important science and the donors’ intent.
I then defend a more provocative claim: namely, that the waiver was an ethically acceptable part of a take-it-or-leave-it offer which did not exploit the donors.
This short article takes a middle position on the question of whether researchers must return individual genetic results to subjects, in the context of a proposed longitudinal study of genes, environment, and health, in which the National Human Genome Research Institute hopes to enroll a cohort of 500,000, primarily through door-to-door recruitment. A survey found that most respondents would want to be offered the return of their individual genetic results. I reject both (1) the claim that evidence that subjects want to be offered results entails a duty by researchers to do so and (2) the paternalistic position that most individual results should not be returned.
I reach the first conclusion by arguing that the subject-researcher relationship in this case should be viewed as a sort of arms-length donative contract: an agreement under which the donor-subject agrees to make a gift to the donee-researcher, who arguably holds the gift in trust for society (especially given that the research is government-funded) and does not promise anything directly in return. A contract’s terms need not, and rarely do, reflect the ideal preferences of either party, and researchers may make offers to potential research volunteers that reflect their own preferences, resource limitations, and conscientious beliefs about the wisdom of returning certain results. The acceptance of what often must be, for pragmatic reasons, a take-it-or-leave-it offer to volunteer is not invalidated, nor the ensuing research rendered unethical, because the volunteer’s gift is not “repaid” in the form of research results or, indeed, in any form (though subjects must be told results will not be disclosed). Moreover, returning individual results in an ethical manner can be costly, and it would be odd (and perhaps not in keeping with subjects’ overall preferences, which most surveys are not designed to elicit or construct) if researchers had a duty to subjects that undercut their ability to devote the maximum available resources to the research itself — the very end that should motivate the subjects to volunteer in the first place. Finally, even if we assume that subjects own or are otherwise entitled to their genetic information, surely such property can be alienated, or such entitlements waived, through the donative contract. A contract’s terms, however, must not sink to the level of unconscionability.
Empirical data can, however, inform researchers who admirably want, and are able, to honor subjects’ preferences. Subjects have interests in receiving a much wider range of results than is often acknowledged, and most of the psychosocial risks they are said to face from disclosure remain speculative. Returning results may also help researchers recruit and retain subjects.
Two primary areas of consensus exist regarding DNA’s role in the criminal justice system. First, because DNA changes the meaning of time in that system, its notion of finality must be rethought: evidence must be preserved, statutory criteria must be developed to govern postconviction access to and review of DNA evidence, and statutes of limitation on the introduction of new evidence should be lengthened or abolished. Second, DNA offender databases can be effective and legitimate tools for solving and preventing crimes.
In the area of posttrial relief, the areas of disagreement that we discuss include the appropriate gatekeepers of and criteria for postconviction review, the appropriate response to exculpatory and guilt-confirming results, and how to address the systemic flaws in the criminal justice system that DNA testing has unearthed. In the area of offender databases, we discuss debates about the appropriate criteria for direct inclusion in the database, the appropriateness of indirect inclusion of offenders’ relatives through low-stringency searches by law enforcement, and how the database should be regulated.
I argue that the gains sought by this particular proposal would likely be swamped by the costs of enforcing such contracts and by the chilling effect they would have on subject recruitment. But I agree that we should explore the role of contract in rethinking the way we govern human subjects research. Doing that in any significant way, however, requires an understanding of the subject-researcher relationship that is not fiduciary. In this article, I defend the claim that this relationship is in fact not fiduciary, cannot coherently be made fiduciary, and ought not to be fiduciary. I then defend the role of contract not only in the efficient conduct of research, but also, and more importantly, in serving the interests and welfare of research subjects.
With respect to the goal of expanding federal hESC research, the Guidelines effect only incremental change in the scope of eligible research, preserving all of the Bush restrictions except the prohibition on funding research on new lines. Although that amendment expands — in theory, infinitely — the number of newly eligible lines, much research is based on existing lines, and the number of those that will become eligible depends on how strictly NIH applies its detailed informed consent requirements. Finally, expansions in the fundability of hESC research matter largely to the extent that such research is actually funded, and we predict that, compared to other funders, NIH’s funding will increase only modestly.
With respect to the goal of ameliorating the “patchwork” of standards governing U.S. stem cell research, although the Guidelines centralize crucial aspects of federal policy, and may exert influence even over non-NIH-supported researchers and other research funders and regulators, they almost certainly will not substantially reduce the multiple standards for conducting hESC research that exist in the U.S., much less in the world. Multiple funders and regulators, and so multiple sets of rules, will continue to exist, with none clearly dominant.
Our best guess for the short-term future of U.S. stem cell policy in the aftermath of the Guidelines, then, is that — for better or worse — it will look very much like the recent past.