Canadian Biotechnology Advisory Committee

Biobank Research: The Conflict Between Privacy and Access Made Explicit

Prepared by Michael Yeo, PhD, Associate Professor
Department of Philosophy Laurentian University
(myeo@laurentian.ca)

for

The Canadian Biotechnology Advisory Committee

February 10, 2004

Contents

Introduction

1 Privacy: An Ambiguous, Multifaceted and Contested Concept

2 Privacy and Its Other: The Demands of Access
  2.1 Access Demands in Contemporary Society
  2.2 Access Demands in the Health System

3 Access Demands and Biobank Research
  3.1 The Peculiar Nature of Biobank Contents
  3.2 Biological Secrets
    Security Privacy: Risks and Harms
    Self-Determination Privacy and Consent
      Consent and Historical Samples
      Consent and Prospective Collection
  3.3 The Scale of Research and Biobanking

4 Privacy: The Potential for Confusion
  4.1 Respect for Privacy and Protection of Privacy
  4.2 Confidentiality
  4.3 Loss of Privacy and Violation of Privacy
  4.4 Justifiable Violation of Privacy

5 Privacy from the Perspective of Research Access

Conclusion: Toward Informed and Democratic Public Policy

Appendix: Privacy Issues and Applications
  A Anonymous, Key-coded and De-identified Personal Information
  B Consent and Safeguards
  C Forms of Consent and Exceptions to Consent
  D Biobank Research, Privacy and Accountability

References

Introduction

The term “biobank” is a recent entry into our lexicon. Its emergence is marked by an increase in desire, need or demand for access to and collection of human genetic materials. Research has been front and centre in this phenomenon. Biobanking and research that is dependent upon access to biobanks (I shall refer to this simply as “biobank research”) have increased and proliferated in tandem.

Advances in biobank research are undoubtedly exciting, particularly as viewed through the lens of scientific and medical “progress.” However, they have given rise to some uneasiness and concern about attendant social, political and ethical issues. Issues concerning privacy have been especially challenging.

It is not my intention in this paper to inventory the plethora of specific privacy-related issues in biobank research.1 I argue that how these specific issues are framed, debated and resolved is decisively informed by how privacy is conceived or represented. And privacy is a notoriously ambiguous, multifaceted and contested concept.

What is privacy? What counts as loss of privacy? What counts as violation of privacy? What is it to respect or to protect privacy? Why is privacy important? In what value or values is it grounded? With what values does privacy conflict? How should conflict between privacy and other values be resolved? These are unapologetically philosophical questions in the first instance (even if also ethical, social and political questions), and they hinge upon a concept that is especially elusive. We should therefore not be surprised if they do not have straightforward answers, or if such answers as immediately suggest themselves fail to withstand careful reflection or scrutiny. Nonetheless, these questions are unavoidable if we care about how — and how well — ethical issues in which the concept of privacy figures are framed, debated and resolved.

In this paper, I seek to add greater clarity to the concept of privacy and its meaning and significance as it relates specifically to biobank research. My aim is to explicate the main values that are in play and at stake. This requires disentangling two main meanings of privacy and making explicit how biobank research is in tension (differently) with privacy in each of these senses. When this tension and the values that inform it are brought to explicitness, stakes come into view beyond the phenomenon of biobank research as involving specific sectors of the economy or of society. Everyone is a stakeholder, and indeed right-holder, in what is ultimately at stake in the privacy issues concerning research and biobanks.

The paper proceeds as follows. I begin with a survey of definitional or interpretive issues concerning privacy. I distinguish two main meanings of privacy in policy discussion, each grounded in different values: “self-determination privacy” and “security privacy.” In the former case, respect for privacy translates in terms of permitting or enabling individuals to exercise choice or control with respect to their personal information. In the case of the latter, respect for privacy translates as non-maleficence — protection of persons from harms that could arise from access to their personal information.

In the second section, I argue that, increasingly, research on humans is oriented to access to personal information. It desires, needs or demands it. To this extent, it is fundamentally in tension, and potentially in conflict with, privacy. Privacy is an impediment to research, and research — biobank research, especially — is a threat to privacy. I situate this threat in the context of increasing data collection and linkages facilitated by information technologies, with particular attention to the field of health care.

Having sketched this broader context, in the third section I elaborate the phenomenon of biobank research with reference to three of its features that are especially problematic for privacy. I outline privacy concerns as they come into view — and come into view differently — from the perspective of self-determination privacy, on the one hand, and security privacy, on the other.

In the fourth section, I return to the distinction between self-determination privacy and security privacy to elaborate it in greater depth. I show how respect for privacy, protection of privacy, confidentiality, loss of privacy, violation of privacy and justifiable violation of privacy are construed differently depending on which of these aspects or meanings of privacy is held in view.

In the fifth section, I show that privacy is more or less threatening to research, depending on which meaning of privacy one brings to the fore. I describe what I believe is emerging as a dominant research access perspective, which I call the “benign steward perspective.” It has foremost in view access for beneficent purposes. This perspective conciliates the tension between access and privacy by representing privacy primarily in its security aspect or meaning, and accordingly construes the protection of privacy in terms of non-maleficence owed toward “data subjects.” Its privacy emphasis is therefore on safeguards to protect data subjects from harms that could rebound to them in consequence of access to their data. At the same time, it downplays privacy as self-determination, and indeed obscures this aspect or meaning of privacy to the extent it reduces privacy to security, and respect for privacy to safeguards to protect data subjects from harm.

In concluding, I indicate certain challenges to be met if the privacy issues posed by research and biobanking are to be framed, debated and resolved in a manner as properly public, transparent and informed as warranted, given what is at stake.

The paper includes an appendix that contains a sustained discussion of several issues too technical for a general-purpose paper such as this one. These issues are:

  • anonymous, key-coded and de-identified personal information
  • consent and safeguards
  • forms of consent and exceptions to consent
  • biobank research, privacy and accountability.

Readers may wish to refer to the appendix as these issues are raised in the text.


1 Privacy: An Ambiguous, Multifaceted and Contested Concept

Normative issues are often embedded in definitional and interpretive ones, thus turning on how key terms are understood. Discussion about issues may be confused when different parties work from different understandings and assumptions. This is especially true of privacy, a concept that is notoriously ambiguous, multifaceted and contested.

Many definitions of privacy can be drawn from the scholarly literature.2 The most famous and influential is that of Warren and Brandeis (2001/1890, p. 278), who define privacy as a species of the “right to be let alone.” Their account is noteworthy not only for its assertion of privacy as a right but also because they specifically ground privacy in “inviolate personality” (p. 278), which resonates with terms like “self-determination” and “autonomy.” Alan Westin (1984, p. 7), the “father” of contemporary privacy studies, carries this line of thinking forward in defining privacy as “the claim of individuals, groups, or institutions to determine for themselves when, how and to what extent information about them is communicated to others.” Fried (1984, p. 206) defines privacy along similar lines as “the control we have over information about ourselves.”

Although the grounding of privacy in something like self-determination or control is pervasive in the scholarly literature, it is not unchallenged. For example, Gavison (1984) contests the incorporation of norms such as “self-determination” or “control” in the definition of privacy. She defines privacy in normatively neutral terms as non-accessibility to others, which she breaks down into three dimensions: “the extent to which we are known to others, the extent to which others have physical access to us and the extent to which we are the subject of others’ attention” (p. 347). These three dimensions, Gavison tells us, correspond approximately to “secrecy,” “solitude” and “anonymity,” respectively. On her account, why non-access (i.e., privacy) in any of these three dimensions should be valued and whether the ability to exercise control over access is a good thing are questions to be settled independently of the definition of privacy itself.

Whether Gavison’s definitional point is correct or not, normative questions about privacy must necessarily move in the arena of values, as she is fully aware. There are many views in the literature about why privacy should be valued and what importance it has. Some think its importance is overrated, others that it is not rated highly enough. Some think it is intrinsically valuable; others that its value is a function of other interests. Most view its importance with reference to autonomy and respect for persons. Some view it as important for quite different reasons, such as its indispensability for the formation of intimate or trust relationships, the maintenance of social roles, sense of self, democracy, or for various combinations of these reasons. Others focus on its negative significance, such as shielding scoundrels from detection or scrutiny, the promotion of anti-social impulses and the approval of selfishness. Some see privacy primarily as an individual good, others as a social good, and yet others as not much of a good at all. Some specify it as a right, others as a mere interest or good of some sort or other.

There is debate about the overlap between privacy and concepts such as ownership, autonomy, dignity, trespass, intrusion, intimacy, anonymity, secrecy, security, solitude, inviolate personality, etc. Scholars disagree about reasonable expectations of privacy, about what should count as private, and about the boundary between public and private. The scope of the law of privacy vis-à-vis the law of defamation, trespass, intellectual property, free speech, etc., is much contested. There is debate about how best to preserve or protect privacy — whether by consumer choice and vigilance, voluntary codes and self-regulation, law, privacy-enhancing technologies or some combination of these.

Some writers think that privacy has some core, essential, unitary meaning; others that the concept gathers meanings related not so much by essence as by overlapping, family resemblances. As noted, Gavison (1984) distinguishes three main dimensions or forms of privacy: secrecy, solitude and anonymity. McLean (1995) distinguishes four forms: access-control privacy, room to grow privacy, safety valve privacy and respect privacy. Allen (1997) also distinguishes four forms, albeit different ones: physical privacy, informational privacy, proprietary privacy and decisional privacy.

Adding to this mix, there is the concept of group privacy. Information about me, and genetic information in particular, may also reveal, or even be taken falsely to reveal, information about others. It may contribute to a composite picture of some group to which I belong (or even falsely am inferred to belong), which picture may adversely affect the interests of others in that group, whether I am adversely affected or not.

The concept of privacy has been dogged by definitional and interpretive issues since Warren and Brandeis released their monumental article. The new technology that precipitated and shaped their seminal analysis was the camera, particularly as wielded by the press. However, information technologies, and changes in the scope and intensity of information collection and sharing, have increased the complexity of the definitional issues. Virtual identities, click-stream data, composite data linked together and mined from diverse databases, and the translation of bodily substance into digital information — even when the information in question is “anonymous” or “de-identified” — give rise to puzzles about personal identity, what counts as personal information, and what norms should apply to the protection of this information. The adequacy of accounts of privacy that have not been fashioned with such developments in mind is in question.3

Recent technological developments have been anticipated in some measure by the relatively recent notion of “information privacy.” Fair information principles were initially elaborated in a horizon of worries about computerization (U.S. Department of Health, Education and Welfare, 1973).4 These principles have evolved into elaborate data protection regimes in technological societies. For example, the Canadian Standards Association’s (1996) Model Code for the Protection of Personal Information casts a broad net to include not only the individual’s ability to control personal information collection, use and disclosure, but also measures to safeguard the information and provisions for holding data custodians accountable.

Whether the development of fair information principles marks an advance is debatable. Regardless, they are widely enshrined in law and policy and have come to dominate and shape the discussion of privacy issues in contemporary society, including those having to do with biobank research. For present purposes, therefore, it is useful to concentrate on how privacy, as a matter of fact, is conceived or represented in fair information principles and in policy discussion based on those principles.

It is possible to distinguish two main aspects or meanings of privacy — each grounded in different values — as privacy is conceived or represented in fair information principles. The one that has been the most essentially connected to privacy in the scholarly literature I call “self-determination privacy.” The value in which it is grounded is something like respect for persons or autonomy. I call the other aspect “security privacy.” The primary value in which it is grounded is security as concerns the potential that one may be adversely affected in consequence of access to, or use of, one’s personal information. This meaning of privacy has come into prominence largely in consequence of the vogue of fair information principles (which give expression to it) in law and policy.

To be sure, this distinction is contestable, especially if viewed in light of the rather long list of definitions and related distinctions that can be found in the scholarly literature on privacy. For present purposes, it is enough if each of these aspects of privacy can be discerned as operative in contemporary discussion of normative issues concerning privacy and biobank research and if distinguishing between them helps us to better understand the issues and make explicit the values at stake. I hope to convince the reader not only that the distinction serves these purposes, but also that, if this distinction is not made, it is likely that confusion, if not obfuscation, will prevail as concerns how specific issues are framed, debated and resolved.

I elaborate this distinction in greater depth and demonstrate its importance in Section 4 of the paper. Before doing so, I detour through a discussion of what might be called the “other” of privacy, namely, access. I describe developments on the side or in the name of access that threaten privacy, and in turn to which privacy is a threat. In Section 2, I consider the broader social context in which “demand” for access to personal information is increasing. In Section 3, I turn to biobank research and the tension between access and privacy as it arises in this field. Explicating the tension between privacy and access demands in these contexts should in some measure prove the usefulness and validity of the distinction between self-determination and security privacy as provisionally elaborated above, and prepare the way for a more sustained elaboration in Section 4.


2 Privacy and Its Other: The Demands of Access

The concept of privacy can also be elucidated by juxtaposing it against its opposite. However privacy is construed, it is in tension — and in some measure conflict — with access. I use the term “access” broadly to include collecting or assembling existing personal information and generating new information, whether directly from the persons concerned or from existing information about them.

If one considers personal information from the interest in keeping it private — safeguarding it, limiting access to it and so on — it comes into view as something that may be threatened. However, if one considers it from the interest in access — with reference to the interests of third parties in accessing it for purposes that are not those of the “data subject” — it comes into view as a resource. In turn, privacy, as a limitation of access, comes into view as a barrier. From the perspective of privacy, access is a threat. From the perspective of access, privacy is an impediment.

Research is oriented toward access to personal information. Privacy, to the extent it is associated with limiting research access, comes into view as an obstacle or impediment. This observation is stark and may seem grating. However, if one does not begin with a frank acknowledgment of the fundamental tension between privacy and research, there can be little hope of framing, debating and resolving issues fairly and explicitly.5

Freeman and Robbins (1999, p. 327) observe: “So often, health experts have acknowledged the value of privacy protections and then argued passionately that their particular area must be treated as an exception.” The research interest in access to personal information is but one among many interests that threaten privacy today, and researchers join a much larger chorus of voices who express their case for access along the following lines:

Privacy is very important. However, I, or the group or interest I represent, require access to certain information in order to accomplish my very important purpose, which purpose is in the public interest. I seek therefore to exempt myself from the requirement of consent or other obstacles that may impede my purpose, or in the alternative to ensure that whatever requirements are imposed upon me are specified in such a way as not to impede me in my purpose.

We risk missing the forest for the trees if we do not situate the research interest in access within the broader context of increasing information demands in contemporary society.6 Commentators speak ominously of the “end of privacy.”7 If privacy is indeed dying, and I am doubtful of this, its death is not likely to be in consequence of a single fatal blow, but something more like the death of a thousand cuts.

2.1 Access Demands in Contemporary Society

A glance through recent headlines indicates a range of access threats to privacy: workplace monitoring, drug-testing, locker searches, surveillance cameras, databases, identity theft, data mining, racial profiling, retinal scans, bar codes, microchip implants, radio frequency identification tags, identification numbers, biometrics, hacking, surveys, spyware, personalized marketing, cookies, Web bugs and so on.

The purposes for which access is sought are several and diverse: crime prevention and investigation, the administration of justice, the war against terrorism, the war against drugs, the war against poverty, the war against disease and a host of other wars, selling or providing products or services, ensuring accountability, ensuring security and public safety, the right to know, transparency, sexual prurience, public health and so on. Behind these purposes stand a large gallery of persons and organizations interested in access to us, or information about us: insurers, employers, marketers, police, criminals, government agencies, statistical agencies, charities, religious groups and so on.

A wide range of privacy concerns about access can be enumerated. There are concerns about consequential harms such as discrimination, identity theft, career impediment, denial of insurance, embarrassment and humiliation. There are also concerns of a quite different sort about loss of rights with respect to autonomy, individual or collective, and about the conversion of persons or communities into “data subjects” (“data objects” would be a more accurate term) to be used for purposes or ends that they have not set for themselves. If we consider access demands not individually or in isolation but in total, a variety of concerns that are more non-specific or global come into view. A partial list of relevant keywords includes: the surveillance society, dataveillance, social sorting, social control, social engineering, commodification, Big Brother, healthism, collectivism, imperialism and so on.8

2.2 Access Demands in the Health System

Biobanks, although not restricted to the health context, are primarily phenomena related to health care.9 Of all the fields of privacy concern — from employee privacy to privacy in commerce — the health system has emerged as one of the most contentious.

Computerization has contributed significantly to increased demand for health information. By rendering it into a form whereby it is easier and cheaper to reproduce, process and share, it makes it more useful and valuable — thus fuelling demand.10

Emerging policy goals like population health and the prevention of adverse events have also contributed to increased access demands, as has the creation of the Canadian Institutes of Health Research (CIHR) and other recent changes to research funding and infrastructure. Efforts to improve the health system and manage it more effectively are information intensive. More information is sought in order to promote accountability. The initiatives and developments outlined below give some indication of the magnitude of recent changes that are access intensive and arouse what has been called “data lust”:

  • the creation of health information systems and networks and of a “Health Infoway” across the country to facilitate greater access to, and sharing of, information currently scattered in various repositories or “silos”
  • the movement toward comprehensive electronic health records
  • the proliferation of databases linking information from a variety of sources, including registries for various diseases and conditions
  • a trend to public/private partnerships in connection with health information systems and, related to this, increasing commercialization of health information
  • the emergence of the Internet and its potential for the extensive collection of health-related information, sometimes quite surreptitiously
  • the reliance of health reform initiatives (e.g., primary care) on access to information and the prominence of information technology in these initiatives
  • developments in the field of genetics and genomics and increased interest in access to genetic samples and information for a variety of purposes.

In all of this, there is a trend toward information linkages: piecing information together from diverse “data sources” or “silos” to form composite, more revealing pictures of individuals and populations. Research in population health, including population genetics and research on broad determinants of health, exemplifies this trend. Often, it relies upon the extensive collection of information that is not health information as such, but may be as sensitive, such as pertains to lifestyle, financial situation and personal relationships.

Health information collection is also increasingly indirect. Before computerization, there was much less value in the information for purposes secondary to the provision of care.11 Scattered as it was in filing cabinets, the human and financial cost of accessing or assembling it was prohibitive. With computerization, the costs of access and collection are greatly reduced. Information collected for clinical care is increasingly sought for secondary purposes, such as research.12

Thus has developed a rising tension between the access need, desire or demand for health information, on the one hand, and the interest or right of patients, citizens or communities in controlling the collection, use and disclosure of their health information, on the other.13


3 Access Demands and Biobank Research

For the purpose of this paper, the term biobank designates “a collection of physical specimens from which DNA can be derived, the data derived from the DNA samples, or both.”14 This definition is broad and inclusive.15 It encompasses a wide range of collections, including not just those specifically instituted as biobanks but also:16

  • pathology samples in hospitals
  • reproductive materials in assisted reproduction clinics
  • various body substances that may be collected for forensic purposes
  • blood collected at donor clinics and processed by blood agencies
  • specimens sent to medical laboratories for clinical testing
  • specimens obtained directly or indirectly by researchers.

Applying the second part of the definition — data derived from DNA samples — the capture is even broader and includes transcribed interpretive or descriptive data derived from DNA, such as may be included in health records held by a variety of institutions.

Biobank research inherits the privacy concerns that apply to research in the health context in general, particularly as biobank information is linked with information databases from the health system.17 Accordingly, the norms of research ethics, to the extent that those norms have been developed to apply more narrowly to researchers and research projects, do not adequately capture privacy-related issues concerning research and biobanking. It becomes less the ethics of the researcher per se that are at issue than the ethics of the organization or institution or, indeed, the overall research infrastructure itself. In consequence, other normative frameworks, such as business ethics and fair information principles for organizations and databases, are needed to ensure appropriate accountability and oversight.

In addition, biobank research has certain somewhat unique features that threaten privacy to an even greater extent. These features include the nature of biobank contents, the enhanced ability to reveal biological secrets, and the scale of biobank research and access demands for this purpose.18

3.1 The Peculiar Nature of Biobank Contents

The contents of biobanks (body parts, tissues, cells, extracted DNA, etc.) have a quality that sets them apart from other types of information. It is not just that they are more intimate, revealing or sensitive. They are not only about me, in the way that a diagnosis or a photograph is, but also in some sense are (were?) me, or about me in a sense quite different from other information. Moreover, because others besides the one from whom a sample has been taken may be implicated in that sample, it may be necessary to speak not just of “me” or “mine,” but also of “us” or “ours” — if indeed the concept of ownership (and related concepts like “intellectual property”) is appropriate at all.

The precise qualities of relevance are difficult to articulate. Here we are in the realm not simply of nature, but also of culture. The meaning of biobank contents understood in cultural terms is not reducible to biology or informatics, or even amenable to analysis within these disciplines. This is more apparent in some cultures than in others. For example, Aboriginal geneticist Dr. Frank Dukepoo claims: “To us, any part of ourselves is sacred. Scientists say it’s just DNA. For an Indian, it is not just DNA, it’s part of a person, it is sacred, with deep religious significance. It is part of the essence of a person.”19

There is something more here than “specimens” as fixed in the ontological gaze of the scientist, even if it comes into view differently for different cultures. This “something more” is significant for deciding what norms should apply to the collection, use and sharing of these contents, particularly those norms having to do with respect, consent, ownership, or even something as unamenable to science as “the sacred.” Moreover, it is not just the genealogy of these contents from my body that perplexes and confounds the simple application of available norms.20 Even when physical DNA has been transcribed into digitalized information, and the originating links with my body severed, the information copied is, or may be, uniquely about me (or us).

How we construe biobank contents — beyond their meaning as fixed by genetics or informatics — will make a difference as concerns how we assess the sensitivity of this information and applicable norms. No doubt one could go astray by making too much of the biological substrate of our identities as persons and communities. Certainly persons and communities are not reducible to their genes. However, neither are our identities free of corporeal roots in the body. One could also go astray by making too little of this.

3.2 Biological Secrets

Annas (1993, p. 2348) argues that, if the medical record can be analogized to a diary, the DNA molecule is like a “future diary.” The secrets locked in this diary pertain not simply to what has been or now is, but also in some measure to what will or could be. Moreover, this future diary is not just about me, but also about us — the families and communities whence we come and to whom in some sense we belong.21 I will elaborate key privacy threats arising from access to this future diary with reference to the two main aspects of privacy previously distinguished — security and self-determination — as either may be threatened by access to our biological secrets.

Security Privacy: Risks and Harms

Security privacy in connection with biological secrets directs our attention to the potential that these secrets, once known (or known to certain others), could adversely affect our interests or otherwise cause us harm.22 Some risks, such as the potential for discrimination or denial of insurance coverage, are quite tangible, and can be managed relatively easily by access restrictions. Others, such as the potential for the information — even if specific individuals are not identified — to adversely affect a group or community, are more difficult to quantify and manage. Yet other risks are so indirect and speculative as to be virtually impossible to assess, such as the potential that precedents arising in biobank research regarding commercialization or intellectual property could adversely impact upon society, or that medical advances enabled by this research could diminish rather than enhance the health system, or lead to greater inequalities and so on.

The magnitude of the risks enabled by the extensive and concentrated collection of genetic information can be very great (even if their probability is very low).23 A plausible story can be told about how research and biobanks can usher in a new world of hope and improved quality of life. Another story can be told according to which this field of dreams becomes a nightmare: if you build it, they will come.

There is a range of risks, which may be more or less probable, or of lesser or greater magnitude. Reasonable people may disagree in their assessment of these risks. Questions thus arise about who should assess these risks, and whether individuals and communities should be empowered to assess and decide whether to bear these risks for themselves or whether others (and who and by what right?) should make this decision for them. Questions likewise arise concerning how these risks should be communicated to those who bear them — what risks to communicate and how framed as against what benefits — both in the context of individual consent and informed public choice. In this regard, risks of an entirely different order come into play, such as the potential for the communication of risks and benefits to degenerate into manipulation.

To be sure, risks and harms can (and should) be minimized with safeguards such as access controls, encryption, audit trails and so-called “privacy-enhancing technologies.” However, safeguards vary in effectiveness and can fail for any number of reasons. And the rules of the game may change, or be changed, down the road. Moreover, the attention to risks and to safeguards to reduce these risks — as important as this is — does not fully address such privacy concerns as arise from privacy conceived as self-determination.


Self-determination Privacy and Consent

Privacy concerns are not exhausted by, or reducible to, concerns about harms and interests, even if we understand these broadly to include not just individual harms but also group harms, and not only physical harms but also psychological or social harms. Even if (as is unlikely) the risk of harm were zero, the very fact of access, collection or use is problematic to the extent it occurs in a manner disrespectful of persons (or communities) and their moral rights or claims over their information. This is what the distinction between self-determination privacy and security privacy is intended to express.

Consent is an important element of privacy, and important for reasons other than ensuring that I am protected from consequential harms.24 Accessing, using or sharing my information without my consent, or doing so under a consent given as a result of manipulation, deceit or fraud, wrongs me even if it does not harm me. This wrong may be further compounded when the information in question has been protected under a promise of confidentiality that is also violated.

A requirement for consent can frustrate information access for various purposes. The need (or desire) for data creates a temptation to gerrymander consent requirements to accommodate this interest (as occurs quite frequently in the world of commerce), rather than accommodating this interest to a bona fide consent requirement. The potential threat of biobank research to the principle of consent (or to self-determination privacy) may be greater reason for concern than the risk of being harmed or adversely affected.

Consent and Historical Samples

Much information of interest to biobank research has been collected for some other purpose. When consent for the original purpose does not authorize desired secondary research access, the requirement for consent is obviously an impediment. In some instances, it may be feasible to obtain a new consent to authorize the new research purpose. In others, this may be deemed impractical or even impossible. In the latter event, either the secondary research collection does not occur, or it occurs without consent. If it occurs without consent, the subject whose data are in question may never know.

Harry et al. (2000, p. 23) describe the problem as follows:

The immortalized cell lines can be stored in various gene banks around the world. Control and monitoring of samples is a critical issue, and it is very difficult to prevent abuses such as samples being used beyond the original intent. It is almost impossible to tell who is using them and for what purpose. Additionally, DNA can be extracted from tissues and blood. Once the DNA is extracted, it is frozen and is stored for years. Again, these samples can be transported to several different labs without the consent of the donor and used for studies beyond their original intent.

Policy statements sometimes conflict in their advice in these matters, or express their advice in terms sufficiently ambiguous for biobanks or researchers to take permissive license.25 There is reason to believe that nonconsensual access to historical samples (whether with the blessing of policy or not) is widespread and occurs quite frequently.26 Weir and Horton (1995) found in a study that “about half of the informed consent documents analysed contain no mention of the possibility of secondary use of stored tissue samples, and just over ten percent mentioned third party access.” In his paper for the National Bioethics Advisory Commission, Weir (2000, p. F-18) refers to “the widespread practice of secondary research done with stored tissue samples or immortalized cell lines that differs in purpose [from that] described in the original consent documents.” He adds that “post-diagnostic research with clinical samples . . . frequently occurs, especially in teaching hospitals . . . largely unknown to patients” (F-19).

Consent and Prospective Collection

In recognition of the issues raised by indirect collection without consent, the large-scale programmatic research biobanks that have emerged in recent years, such as the Estonian Biobank and the UK Biobank, tend to rely upon some form of research-authorizing consent for prospectively collecting new samples, whether this initial collection is directly for research purposes or not.

A consent requirement for research access to new samples or information (i.e., research access is permissible only if, and as, authorized by a consent) is less of an impediment. Authorization for research access can be specified as necessary in a consent at the point of initial collection (regardless of the primary purpose for collection).

The problem that arises here is that the “need” for samples or information creates an incentive to construct the consent or the consent process in such a way as to secure the required consent for the research. The greater the need, the greater the incentive. Consent obtained in view of this dynamic may have less to do with respect for persons than with clearing the consent hurdle in order to obtain the information needed (or desired). To the extent this involves anything like manipulation or deceit it may be even more disrespectful than simply taking the information without permission.

An instructive case that received national media attention in Canada concerned samples collected by a dermatologist in Newfoundland. Abraham (1998) reports that Dr. Wayne Gulliver provided samples collected from his patients to the biotechnology company Chiroscience under contract. The yield from this project is estimated to be between $4 billion and $10 billion. Dr. Gulliver obtained consent from his patients, but how informed was it? The consent form “does not explain the study’s commercial potential, that the patient relinquishes rights to the information contained in their genes, that their genes could be patented and how much money is expected to flow from the patent.” Moreover, given that the collection occurred in the context of clinical care, it could be that these patients did not fully appreciate what was required as part of their clinical care versus for research purposes. According to Abraham, the Memorial University Research Ethics Board “approved the deal,” but the scope of its review is not clear from her report.

The use of “presumed consent” in the Icelandic context is also troubling along these lines.27 It appears that it was chosen as a consent model not out of respect for persons or because of its privacy-promoting virtues but because, among possible models, it is most suitable for bringing in the desired data. Given the extensive protest and challenge that presumed consent received in Iceland, it seems unlikely that other national initiatives or large-scale databases will adopt it as a precedent. However, it may be less instructive as a possible precedent than it is for making explicit the conflict between the research access interest and self-determination privacy and exposing the willingness of the research interest to shape a consent requirement to meet its ends. If presumed consent has become a political non-starter since the Iceland case (which may be debated), we can expect that other dubious models of consent will be tried elsewhere.


3.3 The Scale of Research and Biobanking

The sorts of issues and concerns discussed above apply generically to genetic research involving humans. However, they are more pronounced as they apply specifically to population-based genetics and to research biobanking, to the extent these involve more extensive collection from a larger population for more broadly defined research purposes (as do the large-scale population biobanks that are emerging in Iceland, Estonia, the United Kingdom and elsewhere).28 The following items indicate the extent of the scale:

  • Eiseman’s (2000, p. D-38) “conservative estimate” is that “there is a total of more than 282 million specimens from more than 176.5 million cases of stored tissue in the United States, with cases accumulating at a rate of more than 20 million per year.”
  • Kaiser (2002) presents a table of proposed databases, with sample sizes ranging from 40 000 to 1 000 000 and budgets from $19 million to $212 million.
  • The draft protocol for the UK Biobank (2002) forecasts a cohort “of at least 500 000 men and women from the United Kingdom population aged 45–69.”
  • Uehling (2003) reports on Estonia’s plans to “enlist hundreds of thousands of Estonians — a big share of the nation’s population of 1.4 million.” He estimates that, by “the end of the project, a million people could be in the eGeen database.” As expressed by an official quoted in Uehling’s piece, “It is a major undertaking” to get a “number of patients in a database” sufficient to meet the targets required for their planned studies.

Scale can be significant for a variety of reasons, including the following:

  • The more extensive the collection within a population, the more the collection becomes an issue not only for the individuals within the population but also for the population as a whole, whether as a group, community or nation.
  • The more extensive the collection — including linking information from diverse sources — the greater the loss of privacy.
  • The more extensive the collection (and correspondingly, the larger the biobank infrastructure), the more challenging it may become to map the information flow as necessary for purposes of consent and transparency and to safeguard the information against certain risks.
  • The larger the scale, the larger the potential impact upon the society, whether cultural, social, economic, democratic, etc., and the more challenging to specify and safeguard against such risks.
  • The greater the stake of the population (nation, community or group), the greater its claim to representation in decision-making, and the less sufficient (even if necessary) is individual consent.
  • The larger the number of data subjects needed for the research biobank, the greater the challenge may be for recruitment, which increases the incentive to compromise consent.
  • In many cases, the collection is not in the service of a discrete research question or even set of questions, but rather something more like a research program. Lack of specificity or open-endedness about the potential for new research uses comes up against informed consent if consent is interpreted to require specific information about future uses.
  • The larger the scale, the greater the cost and need for financing, which may divert scarce resources from other worthy endeavours, or give rise to partnerships and commercialization arrangements inconsistent with the values of members of the research population.
  • The larger the scale, the more difficult it may be to ensure data security (e.g., when large numbers of people have the opportunity for access).

Many of these novel challenges, and the inadequacy of existing ethical frameworks to address them, have been noted with specific reference to population genetics, particularly as concerns obligations not just to individuals but to groups or communities (Harry, Howard and Shelton, 2000, p. 19):

It has become evident that this new area of science and technology poses new challenges with regard to existing ethical practices. Current bioethical protocols fail to address the unique conditions raised by population-based research, in particular with respect to processes for group decision making and cultural worldviews. Genetic variation research is population-based research, but most ethical guidelines do not address group rights.

The concept of “group privacy” comprehends rights, moral claims or interests that transcend those of any particular member of a group in information that is from, or about, not just an individual but a group. Members of a community may have an interest in keeping things private with respect to various outsiders, including researchers, as well as an interest in how they may be represented (or misrepresented) as a group.29

One provision that can in some measure address issues arising here is the use of a proxy or proxy group. The issues that arise turn primarily on how the proxy stands with respect to individual consent and how much, or in what way, the proxy is representative. The least problematic case is where the proxy is used to supplement the consent of individual members of a group. For example, I may consent to unspecified use of my information for research purposes on condition that a proxy group that represents my values or interests approves the research. In effect, the proxy or proxy group is a delegated authority that allows an extension of my self-determination.

The case is different where the proxy does not supplement my consent but substitutes for it when I have not agreed to this or delegated this authority. My privacy-related interests may be better safeguarded by a proxy than not, but if my consent is not sought, my moral right or claim has been negated.30

In recognition of such concerns posed by population genetics, Quebec’s Network of Applied Genetic Medicine issued The Statement of Principles on the Ethical Conduct of Human Genetic Research Involving Populations (2003) to supplement their earlier Statement of Principles: Human Genome Research (2000). It is instructive to compare the principles in these two statements that bear on individual rights. Whereas the relevant principles in the earlier statement are quite strong on this count, those in the recent statement are eclipsed by the new principles related to collectives and communities. Since the recent statement is intended to complement, not supersede, the earlier one, this may have little significance as concerns the overall advice to be extracted when they are read together, as presumably they ought to be. Nonetheless, the 2003 statement illustrates how a shift of attention to populations can occur at the expense of, rather than as a complement to, attention to individual rights. In advocating for changes in the ethical framework for population-based research so as to “include respect for collective review and decision making,” Harry et al. (2000, p. 19) are careful to add the very important qualification “while also upholding the traditional model of individual rights.”31

The scale of research biobanking raises other concerns besides those having to do with groups vis-à-vis individuals. Large-scale databases (and the infrastructure to enable or support them) may resemble discrete research projects less than they do institutions, organizations and even businesses.32 Indeed, given some of the budgets anticipated, they may be quite big businesses.

Researchers — and even more so, health professionals involved directly or indirectly in the research enterprise — are cultured into a tradition of professional ethics. This ethics or ethos is aligned with the values of non-maleficence and respect for persons (self-determination) as focussed on the rights or welfare of the individual research subject or population, as well as with public good beneficence.33 The values that shape businesses and large organizations — and those with positions of authority within them — may be quite different, or held with less conviction. Privacy, which is already something of a disvalue for research, may be even less of a priority when organizational or business imperatives have force; claims of public good beneficence are much less credible. To the extent biobank research is also a business or organization, or even partnered with one, the traditional regulatory framework for research may be inadequate. There may be additional need for regulatory norms and processes pertinent to organizations or businesses, such as those concerning governance, accountability and democratic processes.34

When the biobank is on a national scale, as in Iceland, Estonia and the United Kingdom, the need for additional provisions beyond research ethics — for example, provisions for political debate (and not just passive consultation), data protection commissioners, independent, expert audit of security measures, etc. — is even more obvious. However, 20 smaller biobanks may be no less problematic as concerns privacy (and other norms as well) than one very large one. And the need for scrutiny with respect to accountability, governance and democratic processes may be even greater given that, thus diffused, the phenomenon is less likely to attract attention and generate significant political debate.


4 Privacy: The Potential for Confusion

In this section, I focus on how the term privacy may be used or understood in policy discussion about biobank research, and the potential for confusion when the meanings of privacy I have distinguished are conflated. Fundamentally, the problem has to do with the use of language. Terms of decisive importance for framing, debating and resolving issues mean different things depending on how privacy is conceived or represented. I will illustrate this with reference to “respect for privacy” (and related to this, “protection of privacy”), “confidentiality,” “loss of privacy,” “violation of privacy” and “justifiable violation of privacy.”

4.1 Respect for Privacy and Protection of Privacy

As previously indicated, from Warren and Brandeis on, self-determination or control has been central in the privacy literature. Thus conceived, privacy has to do with a place of one’s own — literally and metaphorically speaking — and the access or intrusion of others into that place. Having such a place, or even being such a place, is core to our dignity and autonomy, and important for a variety of other reasons. Our ability to control that place — to exercise significant choice over who has access to it or us and under what conditions, to selectively keep out and welcome in, reveal or conceal — is essentially connected to who or what we are as persons. And respect for that place and our ability to control or determine access is central to respect for persons. Privacy is viewed as a matter of right, or, at the very least, as a strong moral claim, elaborated in terms of the capacity to exercise control over, or self-determination with respect to, one’s information.35 Respect for privacy in this sense is expressed in Principle 3 of the Code of Fair Information Practices (1973): “There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent.”

What most essentially distinguishes self-determination privacy from security privacy is its emphasis on choice or consent, as distinct from risks of harm or adverse consequences. When information is accessed without my permission, I can be wronged thereby even if I am not harmed or injured. “To treat a man without respect is not to injure him,” Benn (1971, p. 8) writes, “it is more like insulting him.” Conversely, to treat someone with respect, to respect his or her privacy, is to ask for his or her permission.

Self-determination privacy is deontological — focussed not on adverse consequences, but rather on rights and duties grounded in the category of the person and in respect for persons. It is also liberal, in the classical sense, as it is focussed on individual rights of non-interference, and on rights as distinct from harms.36

Ruebhausen and Brim (1966, p. 426), writing in a technologically much simpler time, succinctly articulated the core idea of self-determination privacy as it relates to research: “The essence of the claim to privacy is the choice of the individual as to what he shall disclose and withhold, and when he shall do so. Accordingly, the essential privacy-respecting ethic for behavioural research must revolve around the concept of consent.”37

The “privacy-respecting ethic” is quite different when privacy is construed not in terms of self-determination but rather of security. Privacy is thus represented primarily in terms of the various interests that individuals (or groups) have in their information in view of the potential for adverse consequences (e.g., embarrassment, inhibition, discrimination, denial of insurance or employment, job loss and so on) to occur as a result of access.

When privacy is thus construed, “respect” for privacy translates as concern for the security (and not self-determination) of the data subject as pertains to his or her data. Indeed, “respect” is not quite the right word here. It would be more accurate to say that non-maleficence is to security privacy as respect for persons is to self-determination privacy. Non-maleficence denotes here the duty to ensure that the data subject is not harmed or adversely affected in consequence of access to his or her data. Accordingly, security privacy construes the protection of privacy in terms of safeguards to protect the data as necessary to protect data subjects from harms. Such safeguards, as exemplified in Principle 5 of the Code of Fair Information Practices (1973) requiring “precautions to prevent misuses of the data,” may include policies that carefully limit access to benign users (as determined by the steward), various technical measures such as encryption and anonymization, confidentiality oaths and oversight bodies to monitor compliance.

4.2 Confidentiality

My focus in this paper is on privacy, but a few comments about confidentiality are in order since these two concepts overlap. I will confine my remarks to illustrating certain respects in which the concepts of privacy and confidentiality are similarly ambiguous in meaning and usage and may similarly give rise to confusion.

Like privacy, confidentiality can be grounded either in self -determination or security. In the case of what I will call “self-determination confidentiality,” confidentiality is a species of respect for persons (i.e., as autonomous, self-determining beings). In the medical context, the duty of confidentiality requires keeping the patient’s information secret. This duty is breached if the physician shares the information without the patient’s authorization, even if he or she does so for justifiable reasons (e.g., to prevent harm to someone else).

In the case of “security confidentiality,” the physician’s duty of confidentiality is grounded in care about the patient’s security (i.e., non-maleficence): protecting the patient’s information to ensure that the patient is not harmed in consequence of others having access to it. If the information is shared with others (e.g., researchers, other health care providers) without the patient’s consent, this sharing may not count as a breach of confidentiality at all, provided that the patient is not thereby put at risk.

Historically, confidentiality has been defined in terms of keeping information secret, which gives greater weight to the self-determination grounding. However, there is a trend toward defining confidentiality rather as keeping the information within a circle of “authorized users.”38 The assurance that information will be kept confidential thus translates as assurance that it will be shared in the circle of authorized users only. The circle is drawn to rule out persons or organizations who may be disposed to use the information to adversely affect the interests of the patient. Confidentiality is breached when information is shared with or accessed by unauthorized users. It is not breached provided that it remains within the circle, even if the patient has not consented to the information sharing, and indeed even if the patient were to object to it.

The question of who does the authorizing is a critical one. From the perspective of self-determination, the authorization is vested with the patient, or the physician acting as the agent of the patient with his or her implied or express consent. If the patient has not authorized the information sharing, this is a breach of confidentiality, notwithstanding that some other authority (e.g., as authorized by legislation) has “authorized” the sharing. This breach is obscured — the fact of this breach is not even registered — when confidentiality is defined on a model of authorized sharing.

4.3 Loss of Privacy and Violation of Privacy

If we accept Gavison’s normatively neutral definition of privacy as “non-access” (and it has much to recommend it as a starting point for analysis), loss of privacy occurs whenever personal information is collected or accessed by others, and regardless of the potential for harm, or whether it occurs with consent, or is justifiable. The extent of this loss depends on both the extent and nature of the information thus collected or accessed.

Following Gavison, what counts as a loss of privacy is a quite different question than whether a given loss of privacy counts as a violation of privacy, which question is different yet again from whether a given violation of privacy counts as a justifiable violation. Whereas the question of loss of privacy is more or less a factual or empirical determination, the question of violation, and justifiable violation, cannot be answered without reference to explicit values or moral norms.

Loss of privacy, even as assessed from the standpoint of the person whose privacy is lost, may be a good or bad thing, depending on a variety of considerations. However, violation of privacy, even if justifiable, negates a value or moral norm. It makes a difference which values or which moral norms are brought to bear in the assessment of violation.

For self-determination privacy, the issue of violation turns primarily on consent.39 A loss of privacy to which I have consented (e.g., I freely disclose information to my physician for the purpose of receiving care, or share a secret with a lover) is not a violation of privacy. However, if the information is subsequently shared, used or accessed in a way not authorized by this consent, or the consent has been obtained by fraud or deceit, my privacy (i.e., a value or norm that pertains to privacy, where privacy is taken in Gavison’s neutral sense of accessibility to others) is violated. Thus conceived, violation is not harm-dependent, as Robertson (1999, p. 64) illustrates with the example of a “Peeping Tom” who “offends privacy even if the person viewed through a bedroom window is unaware of being seen and suffers no other consequential harms from the viewing.” Similarly, Robertson continues, “privacy is invaded when an unauthorized person looks at another’s medical records.”40

The distinction between loss of privacy and violation of privacy is far less clear, if not collapsed altogether, when privacy is construed or represented as security privacy. For security privacy, privacy is preserved to the extent that consequential harms are safeguarded against. Thus even what counts as a loss of privacy on Gavison’s neutral account (personal information becoming accessible to others, for example to researchers “authorized” to access the information by someone other than the data subject) may not even appear as a loss of privacy, much less a violation of privacy, provided safeguards exist to prevent harm or interest-adverse consequences.

For self-determination privacy, however, research access counts not just as a loss of privacy but also as a violation of privacy to the extent it occurs without consent, or with a pseudo-consent constructed not to ensure respect for persons but to facilitate and ensure access to desired data. Safeguards that protect me from harm in consequence of a loss of privacy, no matter how reliable and trustworthy they may be, do not cancel out the fact of loss when information about me is accessed, or of violation when this access occurs without my consent, or under a pseudo-consent.

It would be a serious confusion to reduce the point here to suspicion about, or lack of confidence, in safeguards. To be sure, there may also be reason to question the adequacy of the safeguards. What about hackers? Is the information really anonymous? What if whomever gains access (and who this “who” is and how or by what authorization they have access to my information in the first place, and with what authorization they subsequently enable or facilitate the access of others yet to these data, are questions of decisive significance) is incompetent or reckless as concerns the protection of my data, or the protection of my interests in my data? Can this “who,” and whatever safeguards are in place, be trusted? After all, almost every day, stories appear in the newspapers about safeguards that have failed or persons who, whether through maliciousness or incompetence, have abused trust that has been vested in them. There are always risks and, in the name of honesty and respect for persons, whatever assurances about safeguards ought not to misrepresent the risks.

But even if we could be certain that safeguards against such risks were 100 percent foolproof, there would remain the question of consent.41 Naser (1997) illustrates this point nicely with reference to trespass:

Even if the possible harms due to the disclosure of genetic information can be avoided by appropriate public policy, there remains the fact that patients expect that their records will remain confidential. Simply eliminating the possibility of harm due to disclosure does not eliminate the wrong or disservice to a patient whose records are open to perusal by so many individuals and agencies. Alexander Capron has elaborated on this distinction by arguing that a person who enters your house while nobody is home and neither takes nor disturbs any of your possessions has not harmed you by removing or damaging any of your property, but has wronged you by both trespassing on your property and by violating your privacy.

Indeed, assurances about safeguards may occlude, obscure or even obfuscate the point that privacy, in the self-determination sense, is violated and not respected or protected, to the extent information access occurs without the person’s authorization, and regardless of adverse consequences and safeguards against them. “It is not just a matter of fear to be allayed by reassurances,” Benn (1971) argues, “but of a resentment that anyone — even a thoroughly trustworthy official — should be able at will to satisfy any curiosity, without the knowledge let alone the consent of the subject.” There may indeed be reasons for fear, but fear is not the issue!42

Moreover, such assurances about safeguards, if offered as if to neutralize privacy concern, are patronizing and, to adapt Benn’s earlier-cited phrase to present purposes, add insult not to injury, but to insult. As Benn, describing the situation of someone concerned about privacy in the self-determination sense, puts it: “to treat the collation of personal information about him as if it raised purely technical problems of safeguards against abuse is to disregard his claim to consideration and respect as a person” (p. 12).

Safeguards to ensure that information access occurs only as authorized are beside the privacy point, and obscure or obfuscate the privacy point, when this authorizing is given by someone other than those whose information is accessed, who may not know about, much less consent to, the access. The privacy question remains: authorized by whom? If not by the individual in question or as consistent with prior authorizing statements, the information flow as authorized by whomever, however adequate the safeguards, is a violation of privacy.43

4.4 Justifiable Violation of Privacy

If there is confusion or talk at cross purposes about what counts as a “respect for privacy,” “protection of privacy,” “breach of confidentiality,” “loss of privacy” or “violation of privacy,” there can be little hope of resolving privacy issues in an explicit, principled way. And there can be little hope of coming to an explicit and principled resolution of whether, and under what conditions, a given violation of privacy is justifiable. For whether a given access counts or registers as a violation of privacy in the first place, or indeed even as a loss, depends on whether privacy is construed in terms of self-determination or security.

In providing a blood sample for clinical testing, I am losing privacy in the neutral sense that information about me becomes accessible to others, even if I give my permission and do so willingly or even enthusiastically. If subsequently this sample is accessed by someone else without my authorization (including a non-malign researcher, under safeguards as adequate as safeguards can be), this counts as a violation of privacy, where privacy is construed in terms of self-determination. However, there may (or may not) be sufficient reason to justify this violation. Obviously what counts as a sufficient reason is key to debates about these matters, which depends in part on how one interprets normative claims about privacy. For example, if privacy is interpreted as a matter of right, the test for sufficient reason to violate privacy will be more stringent than if one conceives it as a different sort of moral claim or interest.

These important questions about justifiable violation of privacy can be raised in an explicit and principled way only if the fact of loss or violation is registered in the first place. If privacy is construed as (or reduced to) security privacy, and respect for privacy to non-maleficence, what would otherwise count as a violation of privacy (or even loss of privacy) may not count as such at all. In consequence, the need to justify or account for this loss or violation may not even arise.


5 Privacy from the Perspective of Research Access

From the perspective of research access, personal information is a resource. Privacy — to the extent it limits this access — is an impediment or obstacle. However, the potential impediments posed by self-determination privacy are a greater challenge from the access perspective than those posed by security privacy. The reason for this is obvious when we consider what it is to respect or protect privacy in either case. Security privacy requires reasonable assurance that persons will not be harmed or adversely affected in consequence of research access to their information. Such safeguard requirements as may be necessary to provide this assurance, although perhaps inconvenient, can ordinarily be met by research and therefore do not threaten access.

Self-determination privacy, by contrast, imposes a consent requirement. Such a requirement may not only be inconvenient; it can effectively block access to data that are desired or needed for research purposes (for example, in the event that consent is impossible or impractical to obtain, that persons refuse to consent, or that consent introduces a sample selection bias).

Research access comes into view differently as assessed with reference to each of these aspects or meanings of privacy. Importantly, research access that does not appear or register as a violation of privacy, or even as a loss of privacy, when assessed in light of security privacy may appear as such when assessed in light of self-determination privacy. For example, imagine a data steward who conceives his or her duty in terms of protecting data subjects from adverse consequences due to others having access to their data. If research can meet the test of non-maleficence (e.g., the researcher signs a confidentiality agreement, appropriate safeguards are in place, etc.), the data steward may admit the researcher into the circle of confidentiality — those “authorized” to have access — without this registering as a loss of privacy, violation of privacy or breach of confidentiality. Since no loss, violation or breach of privacy will have been registered, the difficult issue of justification need not arise at all.44

Given the research interest in access, it is only to be expected that the research access perspective will tend to favour representing privacy in terms of security rather than self-determination. The favoured regime for information privacy will have at its centre not individual data subjects but rather a benign steward, authorizing research access as necessary for research purposes, with or without consent, subject to safeguards.

What I am calling the benign steward perspective describes a values orientation toward personal information and its use for research (as well as other purposes) that represents privacy primarily in terms of security and downplays self-determination privacy. Its focus is not privacy — in either of its aspects — but rather access to personal information, viewed as a resource in the service of benign purposes.45

This perspective has roots in an older values perspective (more precisely, one of two, as I will show) within the field of research ethics. Numerous commentators have noted a values divide in research ethics, distinguishing two main, and sometimes competing, values perspectives. For example, in broad terms, Robertson (1999) and Naser (1997, esp. pp. 63–66) distinguish between deontological and consequentialist perspectives in research ethics.46 MacDonald (2000, see esp. pp. 36–41), in a discussion of the research ethics framework in Canada, contrasts a “rights” perspective with a “utility” perspective.

At a lower level of generality, Faden and Beauchamp (1986, p. 185) distinguish two “frames” for research ethics: “One frame features respect for autonomy, as expressed in informed consent. This frame is, in effect, an autonomy model. The other frame emphasizes issues of welfare and harm — in effect, a beneficence model.”47 Hanson (2001, p. 36) likewise argues that “in the context of research ethics a first distinction should be made between a principle of protecting integrity . . . and the well-established principles of beneficence and non-maleficence.” Autonomy translates more or less as the deontological or rights perspective; non-maleficence and beneficence, taken together, translate more or less as what they call the consequentialist perspective or utility perspective.

Before situating the benign steward perspective with reference to these two perspectives or frames, it is instructive to relate this values divide in research ethics to the value difference that underlies the two aspects of privacy I have distinguished. There are important similarities, but an important difference as well. Self-determination figures more or less the same in the first term of each values distinction, as does non-maleficence in the second term. The difference is that, in addition to non-maleficence, the consequentialist or utility perspective also comprehends the value of beneficence, whereas security privacy incorporates non-maleficence only.

Non-maleficence and self-determination, both as concerns privacy and research ethics, are rooted in concern for the individual, whether the data subject or the research subject. Beneficence, on the other hand, concerns the good of some other or others. Whereas non-maleficence and self-determination are for the most part complementary, beneficence is significantly in tension with either of these values, and with self-determination in particular. In at least some instances, beneficence (e.g., the good, for others, to be achieved through research) requires the negation of either self-determination or of non-maleficence as concerns the data subject or the research subject.

The benign steward perspective aligns with, and is rooted in, the consequentialist research ethics frame; that is, with the values of beneficence and non-maleficence (security). The qualifier “benign” signifies that the information of interest is purported to have significant public good value. A benign steward, with this public good in view, controls and authorizes information access in the service of this good.48 This perspective is not necessarily antagonistic to self-determination, any more than self-determination is necessarily antagonistic to beneficence. However, to the extent self-determination matters, its importance is secondary or subordinate.

Privacy, represented either as self-determination privacy or security privacy, does not have a grounding in beneficence. It would be quite meaningless to talk about “beneficence privacy.” The benign steward perspective, with beneficence as its overarching value (i.e., access to personal information for beneficent purposes), is not a privacy perspective. On the contrary, it is potentially in conflict with privacy in either of its meanings as distinguished. However, it is a perspective on privacy — the perspective of access, to be precise — from which perspective privacy represented as security (non-maleficence, at least, is in the same family as beneficence) is less access-threatening than is privacy represented as self-determination.

To be sure, the benign steward perspective assigns value to self-determination (as does the consequentialist research ethics frame). If the end in view can be achieved just as well with meaningful consent, so much the better. However, self-determination is subordinate to access as necessary for benign purposes (as determined by the benign steward). Self-determination — some amount of control vested in the individual data subject — will be given weight to the extent it is compatible with access for beneficent purposes. To the extent there is a conflict — and conflict is unavoidable — one or the other must give. The benign steward’s presumption is on the side of public good beneficence.

It is not surprising to find, therefore, that champions of research access tend to downplay self-determination or control. Not infrequently, the complexity of contemporary information systems is invoked as if to undermine the relevance or plausibility of individual control or consent. Moor (1997, p. 31), for example, while acknowledging the value of control, criticizes Fried’s control-based view of privacy on the grounds that “in a highly computerized culture this is simply impossible.” He elaborates the impossibility theme in another paper specifically on genetics, where he urges “that it is important to consider all the conditions under which access should be permitted or not permitted when control is not realistically possible” (Moor, 1999, p. 261).

Although Moor adduces impossibility in argument against control on the part of the data subject, he believes that it is nonetheless possible for this control to be exercised by someone else (i.e., a benign steward), in whom he (and not the data subject!) vests the authority to decide “under what conditions access should be permitted or not permitted.” What access “should be permitted or not permitted” is a normative decision, a value judgment. If this judgment is not delegated to the data subject, it may have less to do with “impossibility” than with ensuring that the normative issue is decided in favour of facilitating access for purposes that Moor (or whomever he would entrust as benign steward) deems compelling.

If Moor does not explicitly malign self-determination in the name of beneficent access (arguing circuitously, instead, on dubious grounds of impossibility), other commentators are not so shy. The frank truth is that consent requirements, much more so than harm-reducing safeguarding requirements, can significantly impede or thwart access for purported beneficence purposes. “The difficulties in relying on this approach [individual consent] are immense,” Etzioni (1999, p. 158) writes, going on to enumerate various ways in which consent may impede health research in complex information systems. Undoubtedly, a requirement for consent does impede research, which is why commentators like Etzioni, for whom public good beneficence is the overarching value, prefer to represent privacy in terms of security.

Lowrance (2002), in his Report for the Nuffield Trust on the secondary use of data in health research, exemplifies the benign steward perspective.49 Consent comes into view primarily not as an expression of respect for persons or a badge of individual citizenship in a free and democratic society, but rather as an impediment to accessing information to promote the public good. The informational regime he advocates has at its centre not the person — a foundational requirement of consent from whom would constitute a major access impediment — but a benign steward or stewards managing information access for the public good, constrained by various safeguards to protect the interests of persons whose information is thus being shepherded. Rather than recognizing anything like a right to control one’s information, Lowrance supposes that altruism or duty as imputed to the data subject can somehow (he does not offer an argument) justify its nonconsensual collection or use for public good purposes.

At the same time as consent is devalued from the benign steward perspective, security (i.e., non-maleficence) is emphasized. Privacy is admitted to be of immense importance, but it is security privacy that is intended, not self-determination privacy (as indicated by emphasis on various safeguards, assurances about which may have the effect of displacing the issues of consent that would come to the fore if privacy were represented as self-determination).

Indeed, privacy as self-determination may be eclipsed altogether when privacy is construed in terms of security. Recall that under security privacy, terms like respect for privacy, loss of privacy, violation of privacy and breach of confidentiality do not register loss, violation or breach as they would were privacy represented as self-determination. For example, if it is security privacy one has in view, it is possible to make promises of confidentiality that contemplate non-consensually sharing information with others within a circle of non-malign users with benign purposes, without this non-consensual sharing of information registering as a breach of confidentiality (which it manifestly would were self-determination privacy intended). Moreover, the emphasis on safeguards in this perspective and assurances about them may occlude consent issues, as if the existence of safeguards to ensure that data subjects are not harmed obviates the need to seek consent from them.

Many confusions can (and do!) thus arise when self-determination privacy and security privacy are not distinguished, especially when the overarching interest or value is access (allegedly) for public good beneficence. Examples are abundant in policy discussion of privacy and health research.50

A speech given to the Canadian Institutes for Health Research by George Radwanski (2003), then Privacy Commissioner of Canada, is instructive along these lines, and not untypical. Observing that “many people in the health sector have argued that we may need to accept certain infringements on our privacy today in return for the benefits we will derive from medical research tomorrow,” Radwanski says, “I don’t think that this is a necessary or appropriate tradeoff.” This is remarkable given that in the very same speech he says:

As for the impracticability of obtaining consent, I accept as a general principle that cost factors and the difficulty of obtaining consent from 100 percent of a target population may make it impracticable to obtain individual consent for many health research studies. I therefore intend to take an expansive and liberal view on the question of impracticability of consent. . . . My approach will be to allow health researchers to peer discreetly over the shoulder of the physician or primary health care provider.

If privacy is construed in terms of self-determination, the discreet research “peek,” however non-malign it may be, must count as an “infringement of privacy.” If Radwanski does not see it this way, it must be because he is construing privacy otherwise. Indeed he is. “[W]hen it comes to secondary uses of personal health information for health research projects,” he tells us, “rule No. 1 is Do No Harm.” This is certainly the rule for security privacy, but it is altogether beside the point, and in this case obfuscating, when privacy as self-determination is under consideration.51

One should not presume from the fact that someone professes to be, or is appointed to be, a “privacy advocate” that they have self-determination privacy in view when they visit the question of research access. If the foremost end in view is to facilitate access for the research purpose (and there are many reasons why even privacy advocates and commissioners may be thus inclined), it is expedient to represent privacy in security terms, conveniently forgetting, as it were, self-determination privacy (which privacy advocates and commissioners invariably profess). Likewise, one should be aware that, due to the same kind of confusions, or even obfuscations, initiatives that are primarily driven by access may nonetheless be labelled as privacy initiatives, and not access initiatives. Self-determination privacy is clearly not under consideration for Radwanski in this speech to the research community; it is eclipsed by security privacy — the privacy representation of choice when access for the “benign” purpose of research must be assured beyond debate.52

These remarks about the benign steward perspective are admittedly sketchy. A fuller exposition with references would be needed to illustrate and evidence the claims I have made. Nonetheless, I believe that what I have said about this perspective is clear enough in concept that the reader can discern it (or something very much like it) in much contemporary policy discussion and pronouncement in which issues concerning research and privacy are framed, debated and resolved.

Moreover, even if the perspective I have sketched did not capture something of a prevailing values orientation among those who champion research access (whose commitments are primarily in the direction of access, and not privacy in either of the aspects I have distinguished), it would nonetheless be of some use as a heuristic for making explicit certain values tensions that arise in connection with privacy and research.

In all of this, my chief point has been that the concept of privacy lends itself to elaboration within two distinct value orientations: self-determination (respect for persons, autonomy) and security (non-maleficence, data protection safeguards). Issues come into view differently depending on which aspect of privacy is represented, since terms like respect for privacy, loss of privacy, violation of privacy and justifiable violation of privacy mean different things in each account. When these two meanings of privacy are not distinguished, confusion is likely to arise. Moreover, when privacy is represented as security, as if privacy were exhausted by, or equal to, security, privacy as self-determination may be occluded or obfuscated, as may be issues of consent that do not come into view from security privacy.


Conclusion: Toward Informed and Democratic Public Policy

The ability to facilitate the collection, sharing and manipulation of personal information through computerization and genetic technology, coupled with the increased demand for such information from research, constitutes a significant challenge to privacy. How we collectively assess and address this challenge will be an important indicator of our collective commitment not just to privacy but also to accountability and to a free and democratic society.

Research and biobanking are proceeding at a rapid pace in Canada and elsewhere, interlinked with developments in health information systems, networks and databases. Insufficient scrutiny or public debate has been brought to bear. Indeed, the pace and magnitude of change exceed our capacity for ethical reflection and social debate.

Many issues arise, including even issues about who should decide which issues should be discussed, and with what input. Freeman and Robbins (1999, p. 317), in a review of issues concerning privacy and health information, write: “The implications for assessment are enormous. Society has modest comprehension of the effects of computer information technologies on health, privacy, medical care, the economy, or the social fabric, including trust in medical providers or democratic processes.” The implications for biobank research are even greater, given that it raises novel issues besides those it inherits pertaining to the health system in general.

In its recent report on genetic databases, the Quebec Commission de l’éthique de la science et de la technologie (2003) echoes Freeman and Robbins’s concern about “democratic process” and puts the emphasis where I believe it belongs: on the democratic management of the myriad of issues arising in connection with genetic databases. Noting that the subject “has not yet been debated publicly” in Quebec, the commission calls for “open and authentic” consultations. With respect to population databases in particular, the Commission notes the importance of consulting the public “regarding their inclusion in the database,” including on the issue of “whether it should be set up.” It recommends that “all ‘population’ genetic databases for mapping a population’s genes or conducting research on population genetics first be submitted to an informed public to actively involve them in the decision-making process” (p. 10, Recommendation 2).

The challenge lies in determining what kind of democratic processes should be put in place. I have argued elsewhere (Yeo, 1996) that individual consent can be instructive for modelling such processes. Along these lines, the purpose (or an important purpose) of such a process is to ensure the “consent of the governed.” The quality of the process is largely a function of how well it expresses respect for persons and for communities. Is the information provided about the project sufficiently complete for the community to know what it is getting into? Is information that could be material to the consent omitted or hard to access? Is the information that is provided accurate? Is it misleading? Is there undue inducement? Is the voluntariness of the consent compromised by bundling things needed with other things that would not be agreed to but for their association with things needed? Is the process itself sufficiently independent of those with an interest in a given outcome to ensure its integrity and the integrity of its results or conclusions?

The greatest danger is that such processes will be designed not to obtain input that truly expresses considered views — an authentic and autonomous expression of the general will, so to speak — but rather to provide support for a predetermined end. Much concern along these lines has been expressed about the quality to date of democratic processes to scrutinize large-scale biobanks. For example, Dr. James Appleyard, one of the authors of a recent statement on privacy and databases by the World Medical Association, has been very critical of the process the Icelandic government instituted. “For a while, the government was winning the hearts and minds of people,” he says. “They conned the population” (Uehling, 2003). This may or may not be a fair assessment of what happened in Iceland, but it does signal the danger that purportedly consultative and democratic processes can be little more than shams.

The UK Biobank has been criticized along similar lines. In a scathing report, the Select Committee on Science and Technology (U.K. House of Commons, 2003, p. 25, #58) charges: “It is not clear to us that Biobank was peer-reviewed and funded on the same basis as any other grant proposal.” The committee continues: “Our impression is that a scientific case for Biobank has been put together by the funders to support a politically driven project.”

These are serious charges indeed. Because the United Kingdom project contemplates public funding, it is in competition with other demands upon the public purse that may be more worthy. Issues of justice and resource allocation therefore arise, with all the complexity such issues add to a democratic process. In order to assess these justice issues (and others besides), one needs reliable information. However, in competitive situations, there is a temptation to exaggerate the good anticipated as a result of a given project, which can compromise the reliability of information about the project. If, or to the extent that, the provision of information is “politically driven” to secure a given end, whatever processes are in place to review the matter are also compromised.

The issue has to do with more than the reliability of information and potential irregularities in the peer review process. Later, the select committee comments even more sharply about the broader consultation process the biobank instituted:

John Hutton, Minister of State for Health, told the House on 3 July 2002 that “I hope that the [consultation] process followed will help to establish a consensus on the future direction of the project.” We fear that this will not be the case. As our Chairman told the House on 3 July 2002, “we will not be able to establish trust behind closed doors. The discussions about the design of Biobank have to come out of the closet and into the open. We need an open ended, democratic debate about how to conduct this research and about how to make it safe.” It is our impression that the MRC’s consultation for Biobank has been a bolt-on activity to secure widespread support for the project rather than a genuine attempt to build a consensus on the project’s aims and methods. In a project of such sensitivity and importance consultation must be at the heart of the process not at the periphery (p. 27, #65).

These charges may or may not be fair. Regardless, the case can be used to illustrate an important point about the design of democratic processes for projects of this sort. To entrust the design of the process to those with a vested interest in its outcome (which is not to say that there is anything ignoble or sinister about that interest) is to create a situation where the interest in the outcome and the interest in ensuring the integrity of the process may be in conflict. The temptation to engineer the process to achieve the desired end is not insignificant.53 I am not aware of the controls for independence that were put in place in the case of UK Biobank to ensure the integrity of the overall consultative plan. However, at least some of the individual consultations were designed and conducted by external consultants.

There is no obviously right way to structure such processes.54 And such processes would have to be tailored to the scale of the project or phenomena at issue, the stakeholders involved and the nature of the issues. In some cases, consultative processes would be most appropriate; in others, deliberative and decision-making ones.

Appropriate democratic processes for ensuring that privacy issues are adequately addressed and adequate accountability exists would, I believe, engage citizens and communities in public discussion and debate about the issues, and create the conditions under which such discussion and debate could meaningfully occur. A significant hurdle to this is lack of clarity or confusion about the meaning of key terms. To the extent that terms like “privacy,” “loss of privacy,” “violation of privacy” and “protection of privacy” mean different and even opposite things, discussion and debate are at cross-purposes. Important value questions and indeed conflicts are obscured when the access interest is incorporated into a perspective on privacy and what is said about privacy from this viewpoint is confused with a privacy perspective. Issues that should be made explicit, such as the conditions under which privacy may justifiably be violated in a free and democratic society, and whether beneficence is a sufficient ground for doing so, may be obscured by assurances about “privacy protection.”

Care not to confuse the different meanings of privacy, and explicitness about the values in play and in tension, would help ensure that the privacy issues posed by biobank research are adequately framed, debated and resolved in public debate and public policy.


Appendix: Privacy Issues and Applications

Each of the topics discussed in this appendix explores particular aspects of privacy in relation to the establishment and operation of biobanks in greater depth and detail than is possible in a general overview. The following issues are addressed here:

  • anonymous, key-coded and de-identified personal information
  • consent and safeguards
  • forms of consent and exceptions to consent
  • biobank research, privacy and accountability.

A Anonymous, Key-coded and De-identified Personal Information

Commentators agree that anonymity is an important aspect of privacy. Gavison (1984, p. 347) glosses this aspect as “the extent to which we are the subject of others’ attention.” In this sense, I can be anonymous even in a crowded public square, for example, to the extent that no one is paying attention to me.

I can also become anonymous by masking myself such that, even if others are attending, they are not attending to “me.” I can adopt a pseudonym, and can even invite others to attend to what I write under the pseudonym, without thereby drawing their attention to me as the person who stands behind the pseudonym. I can also be anonymous by removing or excluding information that would enable others to identify me from the information, even if the information that is available for them to attend to is, unknown to them, about me.

Many definitional and conceptual issues arise with respect to anonymity, many of which are further complicated by computerization and information technologies. For present purposes, however, I am interested in norms that bear on information about me that is or has been rendered anonymous in the sense that I cannot be identified from the information in question alone (or that bear on the fact or process of anonymization).

To the extent that I cannot be identified, various privacy-related interests that could be adversely affected if the information were linked to me are protected.55 Anonymity is therefore an important element of any privacy framework for research and biobanking. However, whether or to what extent anonymity obviates the need for consent, as is frequently assumed or asserted but seldom carefully argued, is a different question.

In the first place, there may be reason to be concerned that, assurances about anonymity notwithstanding, I may nonetheless be identified. Provisions for anonymization in the Icelandic Health Sector Database, and the accuracy of claims made about the anonymity of the data, have come under strong challenge.56 The issues are conceptual and not just technical. For example, to describe a key-coded system as anonymous is to equivocate about the meaning of the term. In key-coded systems, I remain identifiable at least to whoever has (or has access to) the key that links the anonymous information to me. There may or may not be reason to worry about the key. I may have nothing to fear. Regardless, to the extent there is a key that can link my identity to the information, I am identifiable.
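The point about key-coded systems can be made concrete. In such a scheme, direct identifiers are replaced by a code, and a separately held key table maps each code back to an identity; the data are “anonymous” only to those without the key. The following is a minimal illustrative sketch, with hypothetical record fields, codes and names:

```python
# Illustrative sketch of a key-coded ("pseudonymized") dataset.
# The released records carry no direct identifiers, but a separately
# held key table links each code back to the person it denotes.

records = [
    {"code": "S001", "diagnosis": "type 2 diabetes"},
    {"code": "S002", "diagnosis": "hypertension"},
]

# Held by the biobank custodian; not released with the data.
key_table = {"S001": "Alice Martin", "S002": "Bjorn Olsen"}

def reidentify(record, key):
    """Anyone holding the key can restore the link the coding removed."""
    return key.get(record["code"], "unknown")

print(reidentify(records[0], key_table))
```

So long as the key table exists anywhere, the subjects of these records remain identifiable to whoever holds or obtains it, which is why calling such data “anonymous” equivocates on the term.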

Even if no such key exists and the link has been completely severed, I may be more or less readily identifiable depending on the standard according to which the information has been anonymized and what identifying information exists in other databases to which the supposedly anonymous information may be linked. Standards for anonymization, and even definitions, are by no means settled. In some cases, de-identification means nothing more than removing a name and address. Sweeney (1998, 2001) has made a career out of studying database vulnerabilities and demonstrating how information that is believed or purported to be anonymous can be reidentified, in many cases with very little effort or sophistication. Moreover, DNA specimens can never be anonymized perfectly because identification can be achieved by matching the anonymous sample with some other sample linked to me, as occurs in forensic investigations.
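The kind of linkage attack Sweeney demonstrated can likewise be sketched in a few lines: records stripped of names may still be unique on quasi-identifiers (e.g., postal code, date of birth, sex) that also appear, with names attached, in some public dataset. All of the datasets, fields and values below are hypothetical:

```python
# Illustrative linkage attack: "de-identified" health records joined to
# a public registry on quasi-identifiers (postal code, birth date, sex).

deidentified_health = [
    {"postal": "P3E 2C6", "dob": "1961-04-09", "sex": "F", "diagnosis": "asthma"},
    {"postal": "K1A 0A9", "dob": "1975-11-23", "sex": "M", "diagnosis": "melanoma"},
]

public_registry = [  # e.g., a voter list, which carries names
    {"name": "C. Tremblay", "postal": "P3E 2C6", "dob": "1961-04-09", "sex": "F"},
    {"name": "D. Singh", "postal": "K1A 0A9", "dob": "1975-11-23", "sex": "M"},
]

def link(health_rows, registry_rows):
    """Re-identify health records by matching on the quasi-identifier triple."""
    quasi = lambda r: (r["postal"], r["dob"], r["sex"])
    by_quasi = {quasi(r): r["name"] for r in registry_rows}
    return [{**h, "name": by_quasi.get(quasi(h))} for h in health_rows]

for row in link(deidentified_health, public_registry):
    print(row["name"], "->", row["diagnosis"])
```

No name or address ever appears in the health records; the re-identification is accomplished entirely through the auxiliary dataset, which is why the adequacy of an anonymization standard cannot be judged in isolation from what other databases exist.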

Even if safeguards were adequate to ensure that information was perfectly anonymous (and there is reason to doubt such assurances as may be given), other concerns remain. I may be concerned that my information, without identifying me, may rebound in adverse consequences for me or my group. I may have reasons to object to the use of my information in biobank research that have nothing to do with adverse consequences or harm in any straightforward sense (for me or for my group). A particular research project, or even research program, may be inconsistent with my beliefs and values. It may clash with my “cultural and ethical principles,” as Harry et al. (2000, p. 20) claim obtains for “many indigenous people.” I may hold social or political views at variance with an emerging political economy of research involving commercialization, patenting and private/public partnerships. Even if I am foolish in my beliefs or misguided in my values, the fact remains that I have an interest.

However, even if all my concerns can be addressed perfectly and I have no reason at all to object to the research, there remains an issue of fundamental importance from the self-determination perspective, namely, whether the use of my information for this research (in whatever way my identity may be laundered by subsequent processing) occurs without my permission. Alternatively, even if it is accepted that I have no moral rights or claims over my information once it has been severed from me, my rights or moral claims in it before it has been anonymized extend to whether it is thus processed and used. If privacy is defined in terms of a right to control one’s information, the anonymization and subsequent use of one’s information without permission violates that right, however serious we might judge that violation to be.

Weir (2000) points out that federal regulations in the U.S. (and in Canada as well) permit the practice of anonymizing samples without the consent of the individual source. He comments:

[T]his practice is problematic, sometimes disingenuous, and occasionally deceptive when, for example, clinician investigators obtain a tissue sample for diagnostic purposes, know that they plan later to anonymize the sample for research purposes, do not convey that information to the source of the sample, and subsequently remove the identifiers without consent. (p. F-8)

If at the point of collection with consent a biobank plans to anonymize my information and use it without my consent (or even knowledge), this may very well be material to the consent and its omission a deficiency. If the omission is deliberate it is also, as Weir says, “deceptive.”


B Consent and Safeguards

A comprehensive regulatory framework for privacy and biobank research will have provisions that address both self-determination privacy (i.e., provisions concerning consent) and security privacy (i.e., safeguards). However, much depends on how we construe the relationship between consent and safeguards.

B.1 Consent Is Not Sufficient

Although self-determination privacy gives prominence to autonomy, this is not to say that it obviates the need for safeguards, which address that aspect of privacy having to do with our interest in not being harmed in consequence of access to our information. Indeed, safeguards not only complement consent; in research ethics and fair information principles they are required as a condition of permitting the subject to consent at all. Above some threshold of minimal harm, participation in research is not an option, even if the subject were willing to consent to the research.

The technological and organizational context for information flow in contemporary society is very complex. In the face of this complexity, many privacy advocates question the sufficiency of consent as a gatekeeper. One problem has to do with the ability of people to understand what it is that they are getting into. For example, the gene donor consent form for the Estonian Gene Project (2001) reads: “The Estonian Genome Project Foundation has the right to receive information about my state of health from other databases.” Would a typical Estonian know what those databases are? Indeed, there may be few people in Estonia who would be able to list them. And even if they were listed, would that enable people to understand what they are getting into?

The problem is not just one of complexity. Regan (2003, p. 25) argues that “individual behaviour and organizational behaviour are skewed in a privacy invasive direction.” Given this inertia, consent may not be sufficient for protecting privacy. Regan elaborates: “People are less likely to make choices to protect their privacy unless these choices are relatively easy, obvious and low cost. If a privacy protection choice entails additional steps, most rational people will not take those steps.”

Regan is probably correct in her assessment about the steps people will take to protect their privacy. However, this does not prove that “individual behaviour” is “skewed in a privacy invasive” direction as much as it proves that “organizational behaviour” is thus skewed. If the information system is designed such that individuals can protect their privacy only by a significant exertion, the problem lies not with the unwillingness or inability of people to make this exertion but rather with the design of the system. The reason marketers prefer opt-out systems, in which the default is privacy-invasive and exertion is required to counter this inertia, is precisely because they rely on, if not exploit, the feature of individual psychology that Regan mentions.

To the extent the default for information flow is “skewed in a privacy invasive direction,” consent is indeed insufficient for protecting privacy. However, from a privacy perspective the problem has less to do with the sufficiency of consent than with the design of the system. In an opt-in system, for example, where the default is privacy-protective, consent may very well be sufficient.

Information is increasingly a valuable commodity or resource. In the commercial sector there are strong fiscal incentives to design information systems so as to ensure access to desired information. However, there is incentive to skew the design of information systems in a privacy-invasive direction whenever access to information is needed, for whatever reason. Large-scale biobanks for population genetics require large sample sizes and set very ambitious recruitment targets (e.g., 500 000 for the UK Biobank, and as many as 1 000 000 for the Estonian one), not to mention their needs for access to information in other databanks. The greater the information requirement of the protocol, the greater the incentive to skew system design in a “privacy invasive direction.” In part, this accounts for the selection of the opt-out model in Iceland.

In some measure, careful attention to consent requirements can ameliorate these concerns. With respect to the quality of the consent, for example, provisions could exist to ensure that the consent is informed as necessary, or that dubious models of consent that effectively bypass autonomy to secure access to the information are ruled out. However, even assuming that consent is as proper and properly respectful of autonomy as it can be, there may yet be reason from the standpoint of non-maleficence to limit what people are and are not permitted to consent to, and to rule out research that is privacy-invasive beyond a certain threshold or ensure that it is brought within that threshold.57

B.2 Safeguards Are Not Sufficient

If, or to the extent that, consent is insufficient, one cannot conclude from this that it is not necessary. And if, or to the extent that, safeguards are necessary, one cannot conclude from this that they are by themselves sufficient, or that consent is therefore not necessary. Regan is careful not to draw such conclusions from her analysis of the insufficiency of consent; others are less careful. “The most effective treatments to shore up medical privacy,” Etzioni (1997, p. 182) claims, “cannot rely on the legal fiction of informed consents by millions of patients for every use of every piece of information about them.” It is a caricature of consent (and an insult to the principles upon which it is based) to suppose that it requires a separate consent “for every use of every piece of information.” For purposes of treating patients, for example, there is much that can be reasonably implied or comprehended under consents that are sufficiently broad to enable access as necessary for this purpose. However, the situation is quite different for secondary purposes, for which consent cannot be implied. In this case, broadness of consent is more of a problem as relates to knowing what one is getting into. However, even for secondary purposes, including research, consent need not be so specific as to require a separate consent for “every use of every piece of information.”

Notwithstanding the caricature, Etzioni has a point about the insufficiency of consent. The problem is that he jumps from the insufficiency of consent to its dispensability. He sees the safeguards he proposes, namely, “new privacy-enhancing technologies” (e.g., audit trails) and institutional arrangements (e.g., changing reimbursement systems) (p. 182), as being alternatives rather than complements to consent. The reason he sees things this way is that another value is in play in his assessment of the “most effective treatments to shore up medical privacy,” namely, beneficence. His objection to consent has less to do with shoring up medical privacy (in which case it could be complemented by the other safeguarding measures he mentions) than it does with enabling access for purportedly public good purposes.

By whatever means information about me has come under the custody of a steward, it is good that my myriad interests in this information (and the interests of others implicated in this information) be safeguarded. However, if such safeguarding is a necessary condition in any privacy framework, this does not make it a sufficient condition.

I have argued that assurances about safeguards are, in an important sense, beside the privacy point as concerns consent. Even if there were in place safeguards that perfectly protected all my privacy-related interests (which is hard to imagine), an issue of moral right or claim exists concerning the collection, use and disclosure of my information. That moral right or claim is negated if my information is used without my consent, or under a pseudo-consent, regardless of how well my interests may be protected.

Assurances about safeguards may thus obscure the crucially important issue of whether the sharing of someone’s information without his or her consent — a violation of privacy on the self-determination view — is justifiable, regardless of the safeguards that exist.

If this question is eclipsed, the reason may be that it has already been answered or that an answer to it has been assumed! The benign steward, after all, controls information flow in the service of the public good and believes that this good trumps the data subject’s right or moral claim to exercise control over his or her information. Or so it is assumed. However, in matters of such fundamental importance, and certainly in matters where rights are involved, it is not enough to assert the importance of a given purpose for the public good as if that settled the issue. The case must be argued.

To be sure, the assurance of adequate safeguards may have some bearing on what form consent should take. For example, one can argue that the consent standard should vary according to the degree of potential harm or the degree of identifiability of the information. And such arguments for collection, use or access without consent will certainly be strengthened by assurances about safeguards. However, the argument here can get off the ground only if privacy conceived in terms of self-determination is given its due in the first place and distinguished from the safeguarding of data.

Consent and protection of human subjects, respect for persons and non-maleficence, and rights and welfare have been married in research ethics since its inception. Research should be viewed with respect to both screens. In cases where the potential harms exceed a certain threshold, research may be impermissible, even with the person’s consent. However, this is quite different from saying that, because consent is not adequate for safeguarding the subject’s interests, the need for consent is cancelled by safeguards that do protect those interests. What informs such de-emphasis of consent, I believe, is not the privacy-safeguarding imperative (grounded in non-maleficence), but an imperative of an entirely different sort that is not a privacy imperative at all, namely, the imperative of access to personal information for public good purposes believed to be of overriding importance.


C Forms of Consent and Exceptions to Consent

To the extent that consent functions as a gatekeeper, consent requirements impede research. From the standpoint of promoting research, therefore, there is incentive to bypass this gate (for example, with exemptions, waivers or exceptions) or, alternatively, to ensure that the consent that opens this gate is constructed as permissively as necessary to enable access to personal information for research purposes.

Issues pertaining to consent can be divided along the lines of two crucial questions. What constitutes appropriate consent? Under what circumstances is it justifiable to collect, use, share or disclose information about someone (or some group) without their consent? These questions are explored in turn below.

C.1 Forms of Consent and Pseudo-consent

From the research perspective, it is desirable that consent (if required) be as permissive as possible in order to ensure and enable access. Permissiveness is not, by itself, inconsistent with consent, as consent includes giving permission as well as not giving it. However, the interest in access creates a temptation to gerrymander the consent requirement to accommodate this interest (as frequently occurs in commerce), rather than accommodating this interest to a bona fide consent requirement. It is therefore important to understand the meaning of consent and what reasons and values underlie it in order to determine whether a given consent form or process is adequate.

In their important work on the theory of informed consent, Faden and Beauchamp (1986) distinguish two senses of consent: “autonomous authorization” and “effective” or “institutional consent.” Autonomous authorization, which is consent proper on their account, is grounded in respect for persons, self-determination and autonomy. Effective or institutional consent, on the other hand, is a bureaucratic or institutional requirement that specifies what can pass for consent (or be passed off as consent) in organizational contexts or for organizational purposes. What matters is that the consent be “obtained through procedures that satisfy the rules and requirements defining a specific institutional practice” (p. 280). These procedures may have less to do with respect for persons or autonomy than with the needs of the organization or institution.

The key components of consent proper are knowledge relevant to what one is authorizing and voluntariness as concerns this authorization. As Freedman (1975) puts it, the thing is to “know what it is that one is getting into” sufficiently to make a responsible choice, and to be free to choose to get into it or not. The consent process is shaped to express respect for the person whose consent is sought, and not engineered to produce a particular outcome (e.g., to enable the purpose at stake). To the extent one is coerced, manipulated or deceived, the consent is a pseudo-consent.

In what follows, I discuss a variety of consents and pseudo-consents that may be advanced for biobank research, organizing this discussion with reference to three requirements relevant to consent: knowledge and disclosure, voluntariness, and evidence of authorization.

Knowledge Requirements: Informed, Broad, Specific and Deceptive Consent

One must know enough about what it is that one is getting into to make a responsible choice. One simply cannot consent to something about which one knows nothing. If in a culinary context you were to approach me with a spoon and ask me to close my eyes and open my mouth, I would at least know that you were asking me to taste something, even if I did not know what it was. But exactly what must I know or be told in order for the consent to be proper (i.e., to say that I have autonomously authorized you to bring the spoon to my mouth)? The answer to this question is profoundly dependent on context. In an encounter in the physician’s office, the answer may be quite different from an encounter in the kitchen with my partner. In the physician’s office I may need to know what is in the spoon, or at the very least what properties it has that are relevant to the reason for my visit.

The doctrine of informed consent has evolved and undergone various refinements in the health care context. Its standard of disclosure, as elaborated in law and expressed in various policy statements, is quite exacting, and more so for research than for health care. It can be interpreted as requiring disclosure of the nature of the research and of any risks material to my participation in it.

A requirement to meet this standard of disclosure would certainly impede biobank research.58 If one could only meaningfully consent to research that was precisely specified at the point of collection, obviously consent could never be construed to authorize future research that had not been thus specified. Future research using the information, now unspecified, could occur only if a new consent for the new purpose were obtained, or if an exception to the rule of consent existed under which the research qualified. Otherwise, the research could not legitimately occur.

Knowing this, as a matter of public policy either we accept now to forgo unspecified future research for which it would be impracticable to obtain consent or we provide or allow for it to occur. If we provide or allow for it to occur, this can only be in virtue of either an exception to the rule of consent or an acceptance of consents now that are sufficiently broad to authorize unspecified research in the future.

To gerrymander consent to achieve a given end would be to subvert the values upon which it is based. If respect for persons were our guide, it would be difficult to choose between no consent at all and a consent that was in effect a pseudo-consent. However, it is not at all clear that proper consent requires a standard of disclosure that rules out unspecified future research.59 There is nothing plainly inconsistent with respect for persons or autonomy in my saying: “I consent to the use of my information for research purposes now and into the future.”60 If I have been provided well-chosen examples that indicate the range of foreseeable possible research broken down into generic kinds that could reasonably be expected to make a material difference, it would seem that I can sufficiently know what I am getting into. The opportunity to select from among materially different options, particularly on an “opt-in” rather than an “opt-out” basis, would further promote choice in the matter.61 A proxy that I trusted to review future research with respect to its appropriateness could also enhance the attractiveness of the broad consent by carrying my will in the future.

Issues pertaining to broad consent are therefore certainly open to debate. However, if broad consent to unspecified research purposes is, or can be, consistent with autonomous authorization, this does not give general license to broad consent as such. If a case can be made for broad consent to future research, this is largely because specified and unspecified research can be bridged by means of the notion of “consistent purposes.” The situation is quite different in the case of heterogeneous purposes.62 If consent were so broad as to incorporate quite disparate purposes, one could not know what one was getting into in giving such consent.

In the case of biobank research, broad consent to unspecified purposes may be less reason for concern than the provision of information that is inaccurate or misleading, or the omission of information material to the consent. What information to provide (or not) about risks, for example, and how to describe such risks as it is deemed important to mention, are more complicated questions than in other research endeavours. What are the risks of having our information in a research biobank? What assurances can confidently be given about safeguards? Should mention be made of the risk that data will be inappropriately accessed by persons inside or outside the biobank? What degree of disclosure is required as concerns the financial circumstances of the research and biobank? These are questions about which reasonable people can disagree.

An assertion that information is anonymous when in fact it is key-coded is inaccurate. An assurance that “the police, the prosecutor’s office, the court, the health insurance fund or insurance companies cannot get any data from the Gene Bank” (Estonian Gene Project, 2003) may be accurate, but misleading given the ambiguity of “cannot” (i.e., impossible for them to get versus not permitted, but possibly permitted, under circumstances that may not be improbable or even predictable). The solicitation of consent may also be misleading if prospective research subjects are led to conflate clinical care and research, or the physician and the investigator, or patient and research subject. The potential for commercialization of samples gathered for research would arguably be material to the consent that authorized the collection, and its omission therefore a flaw.

These issues shade into the murky area of deception. For something to be deceptive, intent is essential. If a statement known to be inaccurate or misleading were made for the purpose of securing someone’s consent, that consent would be obtained by deceit. If an item of information were deliberately omitted or withheld because it was anticipated that its mention could in fact be material to the consent and dispose persons not to consent, that consent or consent process would be deceptive. Plans to anonymize samples taken for clinical care and make them available for research are arguably material to the initial consent. Omitting to disclose these plans at the point of consent may very well amount to deception. However, because deception is a matter of intention, and intentions are not always evident, one cannot conclude from the fact (which may be disputed) of inaccuracy, misleadingness, or significant omission that deception has occurred.

Voluntariness Requirements: Mandated, Coerced, Induced and Tied Consent

The voluntariness of consent may be compromised in any number of ways. For example, in the case of mandated collection, I am required by law or under threat of penalty to consent to something (e.g., give a blood sample) that I might otherwise not consent to but for the threat or penalty. My consent is indeed voluntary (I could say no), but the options are clearly framed to secure a yes. The choice situation is coercive, even if within that situation my consent is voluntary.

Tied or bundled consents pose a different problem and raise difficult conceptual issues about coercion, enticement or inducement. Consent is tied when X is connected to Y such that I cannot (i.e., am not permitted to) get Y unless I consent to X. X may be something bundled together with consent to Y, such that in consenting to Y I am taken to have consented to X, or I may be required to give a separate consent to X as a condition of receiving Y. Where Y is something that I need — health care for example — and cannot obtain otherwise,63 or something to which I am entitled as a matter of right, my consent to X may be problematic to the extent that I am enticed into this consent (some would say coerced) by my need for Y. It is difficult to reconcile consent with respect for persons or autonomy when disparate items are bundled to secure my consent to something to which I would not consent but for my need to obtain something else in the bundle.

What makes something an inducement, and at what point an inducement becomes an “undue” inducement, or even coercive, raise very difficult questions about voluntariness and its meaning in a theory of consent.64 Related issues arise in many different research contexts. An emerging practice that is somewhat novel for biobank research is the concept of benefit sharing. The idea that individuals or communities (the arguments are different for each) should receive some share of whatever benefits accrue from the research use of their samples may appear attractive for any number of reasons, including reasons of justice and fairness. But at what point does the expectation of benefit become an inducement, and at what point does something that may be understood and accepted to be an inducement become an undue inducement? If, but for the promise or expectation of benefit, individuals or communities would not consent to participate in research, is their consent therefore compromised with respect to its voluntariness?

In commercial contexts, what effectively amounts to the purchase of consent for information sharing is a common practice. Indeed, many people quite enthusiastically give their business cards or fill out information sheets in exchange for the opportunity to win a prize, or agree to have their shopping history recorded in exchange for air miles. The vulnerability of privacy to such inducements is one of the reasons some privacy advocates worry about the adequacy of consent as a gatekeeper for information.

Consent to research is a different matter, and the standards that have evolved for research are more rigorous than in commerce. Practices that pass a consent test in commerce may not pass a consent test in research ethics. Moreover, consent is not the only gatekeeper in research ethics, and even practices that may be acceptable when assessed from the standpoint of self-determination may not be so when assessed from the standpoint of non-maleficence or of other safeguarding considerations that may be brought to bear.

Authorization Requirements: Express, Implied and Presumed Consent

If you claim to be acting on the basis of my consent, this means that I have authorized whatever it is that you are doing, the doing of which may be justified (or only justified) by this authorization. However, what evidence is reasonable and necessary for you to make such a claim, whether as concerns exactly what it is that I have (purportedly) authorized or the very fact of my authorizing?

I can express my consent in a variety of ways: verbally, in writing, putting a check mark in a box or even by a signal. In the abstract, any of these can equally make the fact of authorization plainly and unequivocally evident. As a rule, the more explicit and unequivocal my expression, the more certain your claim to having my consent will be. To the extent you rely upon assumptions, presumptions or inferences (which may be more or less dubious) to support your claim to have my consent, your evidentiary claim becomes more problematic.

Implied consent involves inferring my consent to X based on my actions or inactions. Those seeking access to data can be very creative in the way they interpret this doctrine, and liberal in their rules of inference. The two verbs must be kept distinct: the subject “implies” consent (or not); the other party “infers” it. The test for implied consent is whether there is reason or evidence to believe that the person did indeed consent. It is reasonable for my physician to infer my consent to use my information for the purpose for which I presented from the fact that I presented and shared my information with him or her in this context. An inference that I have thereby implied my consent to the subsequent use or sharing of my information for research purposes would not be reasonable because there is no evidence that I did.

However, in fact such inferences can be made not with an eye to what I have or have not implied as such but rather to facilitate some purpose for which my consent is needed in anticipation that, if asked, I might decline. To the extent this is so, the inference of consent has nothing to do with respect for persons or autonomy.

The concept of implied consent is sometimes dubiously evoked in connection with so-called “opt out” provisions commonly used in marketing. The default option is that certain information access, use or disclosure will occur unless persons indicate that they do not wish this to occur. If they fail to do so, their consent is implied or, as this doctrine has been shaped in Iceland, presumed.

To the extent this doctrine relies upon an inference (and it can be debated whether presumed consent does), the plausibility of this inference depends upon the person’s being aware of the default, their ability to say no to this default and their capacity to indicate their refusal with relative ease. To the extent persons do not know about the default or their ability to say no to it, or if they must incur some cost or expend some effort to say no, an inference or presumption of consent is implausible, if not disingenuous. Regardless, to the extent the reliance on implied or presumed consent in place of express or “opt-in” consent is in anticipation that, if asked, persons may not give the desired answer, the point is obviously not to express respect for persons but to make an end run around it.

C.2 Consent Exceptions

Whether viewed as a value, principle, moral claim or right — and regardless of whether conceived in terms of self-determination or security — privacy is not an absolute. In some cases, it is justifiable to violate privacy — to access or use someone’s information without his or her permission, and even for purposes adversely related to his or her interests. The issue of justification turns on the assessment of the privacy value as adjudicated in light of some other value that may be more compelling.

In liberal democratic society, there is a strong presumption in favour of privacy, autonomy and consent. The burden of proof falls on the exception. No one disputes that there are exceptions, but what those exceptions are and according to what principles and processes they ought to be decided is a matter of controversy.

In biobank research, the issue arises primarily in connection with secondary or indirect collection of samples or information, particularly where data from a variety of sources are being linked, as in large-scale population biobanks, and when at least one of the data elements is collected without the consent (or knowledge) of the individual. There exists a great deal of biobank information of interest to researchers — held by a variety of biobanks and data custodians — that was not initially collected for research purposes or with consent to such use. If research access to this information were subject to a consent requirement, certain research would be impeded or even rendered impossible. Is the research purpose sufficiently compelling to justify the collection, use or sharing of personal information without consent? What principle would justify this exception?

Careful scrutiny of exceptions to consent has been a hallmark of liberal democratic society. In the tradition of the health care professions, this vigilance has likewise been brought to bear on confidentiality and its exceptions.65 The key principle of relevance for limiting or legitimating nonconsensual access has been a variant of J. S. Mill’s harm principle. This test can be construed, approximately, as follows: exceptions to consent or breaches of confidentiality can occur only if it can be reasonably demonstrated that a foreseeable, imminent and serious harm to a third party can be prevented by allowing access to my otherwise protected information. How probable, serious, imminent and preventable this harm must be has been the subject of lively debate.

Traditionally, some variant of this test has shaped such doctrines as mandatory reporting and the duty to warn. However, the information needs of population health research, biobank research and health services research are virtually limitless, and increasingly exceed what can be strictly justified by the principle of harm prevention. From a public health perspective, it is desirable to have surveillance information not only about communicable diseases but a variety of disease and health conditions. In consequence, pressure to broaden the warrant for consent exceptions has been growing. Lowrance (2002, p. 70), for example, argues “that public health mandates and protections deserve to be clarified, strengthened, and extended for a variety of surveillance, registration, clinical audit, health services research, and other activities.”

The important question is what alternative principle, if not the harm principle, could be evoked to justify broadening nonconsensual collection. Lowrance (2002) does not explicitly pose this question, which is not unusual among those arguing or even lobbying for nonconsensual access to health information. This may be because it is assumed to be self-evident, and not in need of principled argument, that the research purpose should prevail when “it is impracticable to obtain consent.”66 Under this assumption, the mere demonstration that it is impracticable to obtain consent would be sufficient to enable or justify research access without consent.

This assumption appears to underlie the Canadian Institutes of Health Research’s (2001) recommended regulations under the Personal Information Protection and Electronic Documents Act, 2000. The regulations have as their aim to ensure “that the act is interpreted and applied in a manner which achieves the objectives of the act, without obstructing vitally important research. . . .” (CIHR, p. 1).67 The act has an “impracticability” condition that research must meet if it is to occur without consent. If research meets this condition (which is quite access-permissive), it can occur without consent. What is missing from the CIHR document and its elaboration of this impracticability test (see Recommendation 5) is a principle or test by means of which to justify why, or even whether, research should prevail over the consent requirement, or indeed even an acknowledgment that such justification is necessary or important!68

This unremarked and unjustified assumption about the priority of research over rights is in stark contrast to the strong and explicit statement of priorities in Article 10 of UNESCO’s (1997) Universal Declaration on the Human Genome and Human Rights:

No research or research applications concerning the human genome, in particular in the fields of biology, genetics and medicine, should prevail over respect for the human rights, fundamental freedoms and human dignity of individuals or, where applicable, of groups of people.

The demonstration of such respect would at the very least require a principled justification for abridging those rights.69 Thus Article 9 pronounces: “In order to protect human rights and fundamental freedoms, limitations to the principles of consent and confidentiality may only be prescribed by law, for compelling reasons within the bounds of public international law and the international law of human rights.”

In fact, research exceptions to the principle of consent that cannot be justified by the principle of harm prevention do occur, without any alternative principle that could justify them being made fully explicit for scrutiny and debate. Indeed, it may be that unscrutinized exceptions have occurred to such an extent as to acquire a de facto legitimacy (for example, as concerns research and pathology specimens or health information in administrative databases).

If we take privacy seriously, and take seriously the belief that we are a free and democratic society, nonconsensual research must be grounded in a principle explicitly stated and subject to scrutiny in recognition of the importance of what is thus negated. It is not at all evident what alternative principle could justify the violation of privacy or the abridgement of rights. One candidate would be the principle of beneficence or utility, as may be implicit in oft-cited appeals to the “public good” or “public interest.” Argument along these lines would require a carefully constructed definition of the public good, because virtually everyone who petitions for access to personal health information can justify their claim on some interpretation of the public good. A vague appeal to the public good is not enough.

However, even assuming that the public-good importance were demonstrated, the evocation of beneficence or utility as against rights is highly problematic. As Erikson (2001, p. 46) points out, rights as traditionally understood may not be “lightly overridden by reference to another benefit.” He elaborates that a “cure for cancer would, it is true, be of inestimable benefit, but the researching of a cure is nobody’s right.” A consent exception allowing access to A’s information where A threatens harm to B is solidly grounded in liberal theory and in the tradition of medical ethics; an exception premised on benefiting B cannot be so grounded. Acceptance of the principle that it is justifiable to infringe the rights of (potential) research subjects in order to benefit others would have radical and far-reaching consequences for research and research ethics, and indeed for our claim to being a free and democratic society.

An alternative approach would be to evoke the principle of justice for such justification. In countries with medicare, such as Canada and the United Kingdom, one might argue as follows: the use of one’s information for research purposes is an imputed duty owed in exchange for the entitlement to receive publicly funded health care. To get the benefit, you must pay the privacy price. This argument would go differently depending on whether, the deal having thus been made explicit, citizens are given the choice to reject it altogether, for example, by opting out of the publicly funded system. However the deal was specified, making the trade-off explicit would undermine support for, and perhaps even the moral legitimacy of, the publicly funded system. Moreover, the imputation of what would in effect be a duty to participate in research would mark a significant precedent in health care, research and research ethics, particularly if citizens had no choice in the matter of whether to accept the deal.

The issue of justification is of fundamental importance. It bears not only on privacy in a narrow sense but also upon the kind of society we are and how we adjudicate issues of rights and freedoms. It may be that a nuanced version of the beneficence argument or the justice argument — or yet some other argument not adduced above — could successfully justify nonconsensual research beyond what the principle of harm prevention sanctions. It may be that the respect for persons and for privacy appropriate in a free and democratic society can be reconciled with consent exceptions for research purposes and such abridgement of rights or violation of privacy as this may entail.

However, we can be sure that it is impossible to reconcile such respect with the failure to provide explicit justification at all, or with vague justification invoking the public good as if the mere fact of the invocation were enough to settle the issue. The failure to provide such justification, or laxity in regard to it, may say more about the regard in which privacy (and democracy) is held, and be even greater reason for concern, than the abridgement of the right or violation of privacy itself.


D Biobank Research, Privacy and Accountability

Although I have not made a detailed assessment of the existing regulatory framework for privacy and biobank research, I find reason to think that the existing framework is inadequate in some measure at least. In particular, I believe that consent issues are not adequately resolved in the existing regulatory framework. However, my focus here is on accountability, which I take as applying to consent issues, and indeed to all other issues, to the extent that the question whether these issues have been framed, debated and resolved in a transparent, explicit and public way is a question of accountability.

Rolleston’s report (2002) on research and genebanks is a good starting point for bringing the issues into view. The following two conclusions are especially noteworthy:

  • The privacy issues raised by genetics do not differ significantly enough from those of other aspects of research involving human subjects to warrant a separate regulatory regime; and
  • Controls that are already in place for governance of the standards of clinical care are sufficient for governance of issues related to genetic privacy. (p. ii)

Whether existing controls for “governance of the standards of clinical care” are “sufficient for governance of issues related to genetic privacy” is of course a value judgment. It is unclear from what value standpoint Rolleston makes this judgment. However, to the extent that this “governance” allows the migration of samples collected from clinical care to the research context without consent, its adequacy assessed in light of self-determination is questionable. And to the extent that whatever provisions (or lack of provisions) may allow for this have not been determined in an explicit, public way, this governance is questionable from the standpoint of democracy and accountability.

Rolleston does acknowledge in some measure that there is a problem. He notes that few of those he interviewed “are satisfied that the balances now being achieved are in the best interests of the patients involved, or of Canadians as a whole” (p. 22). However, it appears that the problem of “balance” as he and his informants construe it is not that privacy is not protected enough but rather that research is inhibited too much! He notes, for example, that “researchers and clinicians express frustration about the effects on the performance of research of uncertainties of the ethical framework for research in this area, and the variability within and between REBs in handling research protocols involving DNA banks” (p. iii).70 However, “uncertainties of the ethical framework for research” cannot be the problem per se: from the context, it seems most unlikely that a framework that was more certain but also more privacy-protective and more research-inhibitive would be regarded as solving the problem.

If Rolleston believes that such problems as “uncertainty” and “balance” do not warrant a “separate regulatory regime” for research and biobanking, this appears to be because he thinks they can be resolved without modifying the existing ethical framework. His preferred solution is “best practices,” which are “needed if Canadians are to achieve optimal balances” (p. 22).

National best practices are needed for the standards and operational procedures for research ethics in this area, including considerations such as consent, ownership, custodianship, security, coding, transfer of samples between laboratories. This work should involve patients and aboriginal peoples, as well as researchers, clinicians, ethicists, lawyers, etc. CIHR should lead this work. (p. 26–27)

Best practices must surely be a part of any comprehensive privacy framework for biobank research. However, from the standpoint of accountability, I do not believe this approach is sufficient to resolve ethical issues concerning “consent and ownership.” What is “best” is precisely what needs to be adjudicated, and what is best from the perspective of privacy may not be best from the perspective of research access. The resolution of the normative issues could turn less on the arguments than on the numbers in which these perspectives are represented among those brought together to do “this work.” In any event, I believe that these issues are of great enough social importance that they require broader public discussion and debate than could occur in a working group.

Biobank research differs significantly enough from other research to call into question the adequacy of the existing framework for research, particularly when one puts the privacy threats it poses into the broader context of privacy in contemporary society. The following considerations are particularly noteworthy:

  • The collection of highly sensitive and revealing information for research biobanks — particularly as linked with information in other databases and facilitated by information technologies — is increasingly extensive.
  • The loss of privacy, and the potential harms to individuals, groups and society that may occur thereby, is beyond what has been contemplated in research ethics and its traditional oversight regime.
  • The extent of information flow and indirect collection within the health system (and as may be of interest to, and captured by biobanks) has increased to the point that not even experts, let alone ordinary citizens, have a good understanding of data flows.71
  • Data protection safeguards and fair information principles developed in recognition of current trends and practices concerning information technology have not been integrated into research ethics and its traditional oversight regime.
  • The complexity of biobanks and the networks in which they are being created has not been contemplated in research ethics and its oversight regime and exceeds its scope and capacity.
  • The creation of research biobanks goes beyond what can be construed as the activity of research traditionally understood. Large-scale research biobanks are as much organizations in their own right as they are parts of research projects, and in consequence are subject to norms that apply to organizations. It is not simply a matter of holding individual researchers accountable, but of holding organizations accountable.
  • Research biobanks may bring together as collaborators a variety of individuals and organizations, reflecting a variety of interests and agendas, who cannot be held accountable by research ethics and oversight regimes.
  • The issues requiring adjudication, particularly issues pertaining to the self-determination of individuals and communities, are becoming more and more pressing in light of increased information collection. These issues are of fundamental importance in a free and democratic society and bear upon the consent of the governed. As such, the locus of moral authority for their resolution extends beyond research ethics and its traditional oversight mechanisms.

The study by McDonald (2000) and others for the Law Reform Commission of Canada on The Governance of Health Research Involving Human Subjects identifies some serious shortcomings of the existing framework for research ethics in Canada. The adequacy of this framework with respect to accountability is already seriously in question, even as assessed without specific focus on biobank research and the novel challenges it poses.

There are many sorts of provisions that go to make up a comprehensive privacy framework, including policy pronouncements, oversight bodies and security provisions. There is reason to question these provisions, individually and in total. For example, there are many policy issues, as well as many policy documents (often conflicting) and a vast literature addressing privacy, research and biobanking. Policy pronouncements (laws, regulations, guidelines, professional codes and privacy codes) are inconsistent as concerns specific provisions for consent and even key definitions (e.g., “anonymous”). Many are sufficiently ambiguous that partisans on opposing sides of a given issue can find support for their cause. Different pronouncements are made under the auspices of different bodies (e.g., governments, nongovernmental organizations, councils, professional associations and advocacy groups) and vary with respect to who and what activities they capture, the force they have and the mechanisms for enforcement, monitoring and oversight. The legitimacy of these pronouncements may be more or less questionable with respect to whether, or to what extent, they duly and authentically incorporate the voice of the communities and populations affected.

It is questionable whether the guidance in the Tri-Council Policy Statement (Medical Research Council of Canada, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada, 1998) is sufficiently unambiguous to resolve important issues having to do, for example, with the protocol for anonymous information. It is questionable in any event what that guidance should be on this and other issues. It is noteworthy that the document does not reference or mandate privacy impact assessments, which have emerged as a “best practice” for ensuring that privacy-related issues are addressed in database design and management. They also help to promote transparency and accountability by making data flows and policy decisions explicit.

Is such consensus as has been achieved in this document the consensus of today? Was it even the consensus of yesterday? Has the consensus been achieved in full understanding of the kinds of issues that are posed by biobank research? Is whatever consensus has been achieved on matters such as consent — which is of fundamental importance in a free and democratic society — consistent with democratic processes?

The oversight of ethics review may be (and often is) invoked reassuringly to assuage concerns about privacy and biobank research. Ethics review is of course a very important safeguarding provision. However, such reassurances are not very assuaging if, or to the extent that, one does not believe that the policy guidance the reviewers will use to adjudicate the issues (if the issue is recognized as one that requires ethics review for research in the first place) is adequate.

Moreover, the protocols for research and gene banking are very complex and strain both the capacity and competence of research ethics boards. The U.S. General Accounting Office’s (1999) report on privacy oversight with respect to research and medical records noted that the “IRB system is already heavily burdened” and “that the IRB system has limitations that need to be highlighted as policy makers consider expanding its responsibilities” (p. 22). No such study has been done in Canada, but there is reason to believe the limitations are no less significant, and perhaps more so.

The issue is not only one of capacity, but also one of competence. Research ethics boards cannot, and should not, be expected to bring the kind of scrutiny to bear upon research biobanks that data commissioners are able to do. In some regimes (e.g., Estonia), data commissioners are an integral part of the regulatory framework for research and biobanks. In Canada, no such provisions exist.

Distinct from both capacity and competence are issues of independence. The independence of ethics review with respect to various interests — particularly given the extensive range of interests that may be involved in biobank research — is pertinent to the rigour and scrupulousness about privacy this process brings to bear on research protocols. McDonald (2000, p. xii), noting concerns about the independence of IRBs in the U.S. context, suggests that “the pressures on the independence of Canadian REBs” are “much greater . . . than in the U.S.”

Beyond these considerations, it is important to note that the phenomenon of research and biobanking is not reducible to research protocols. In important respects, large-scale biobanks are less like research in the traditional sense than they are like institutions and even businesses. Protocol review does not, and cannot possibly, capture features of research and biobanking that may be of privacy concern.72

Finally, even supposing that all of the issues identified above were addressed (as they can be in some measure at least), an important question remains. Is ethics review the right locus of responsibility and authority for resolving unresolved societal questions of this nature? Is this the right place for the call to account?

In the final analysis, what is most at issue as concerns privacy and biobank research is accountability. What provisions are in place to ensure that information flows only as authorized? Who has the right to do this authorizing? How transparent is the system? What oversight is there by which to hold data stewards accountable? What public processes exist to ensure that initiatives develop with due regard for the interests and rights of the community or population having a stake in research and biobanks, and with public input as appropriate in a free and democratic society?


References

Abraham, C. 1998. “World Gene Hunt Targets Canada: The Unique Gene Pool in Newfoundland’s Outports Has Attracted Researchers Looking to Make a Fortune. But Who Owns the DNA?” The Globe and Mail, Nov. 28.

Allen, Anita. 1997. “Genetic Privacy: Emerging Concepts and Values.” In Genetic Secrets: Protecting Privacy and Confidentiality in the Genetic Era, edited by Mark Rothstein. New Haven, CT: Yale University Press.

Alpert, Sheri. 2000. “Privacy and the Analysis of Stored Tissues.” In Research Involving Human Biological Materials: Ethical Issues and Policy Guidance. Vol. 2, Commissioned Papers. Rockville, MD: National Bioethics Advisory Commission. http://www.georgetown.edu/research/nrcbl/nbac/hbmII.pdf

Anderlik, M. R. and M. A. Rothstein. 2001. “Privacy and Confidentiality of Genetic Information: What Rules for the New Science?” Annual Review of Genomics and Human Genetics 2: 401–33.

Anderson, Ross. 1999. “Comments on the Security Targets for the Icelandic Health Database.” http://www.cl.cam.ac.uk/ftp/users/rja14/iceland-admiral.pdf

Annas, George. 1993. “Privacy Rules for DNA Databanks: Protecting Coded ‘Future Diaries.’” JAMA: The Journal of the American Medical Association 270: 2346–50.

Annas, George. 2000. “Rules for Research on Human Genetic Variation — Lessons from Iceland.” New England Journal of Medicine 342: 1830–33.

Annas, George. 2001. “Reforming Informed Consent to Genetic Research.” JAMA: The Journal of the American Medical Association 286: 2326–28.

Annas, George and M. Grodin, eds. 1992. The Nazi Doctors and the Nuremberg Code: Human Rights in Human Experimentation. New York, NY: Oxford University Press.

Arnason, E. 2002. “Personal Identifiability in the Icelandic Health Sector Database.” The Journal of Information, Law and Technology (JILT) 2. http://elj.warwick.ac.uk/jilt/02_2/arnason.html

Benn, Stanley. 1971. “Privacy, Freedom and Respect for Persons.” In Nomos XIII: Privacy, edited by Roland J. Pennock and John W. Chapman. New York, NY: Atherton Press.

Boucher, Lesley, D. Cashaback, T. Plumptre and A. Simpson. 2002. “Linking-In, Linking Out, Linking Up: Explaining the Governance Challenge of Biotechnology.” Ottawa, ON: Institute On Governance. http://www.iog.ca/publications/biotech.pdf

Burgess, Michael and Fern Brunger. 2000. “Negotiating Collective Acceptability of Health Research.” Section D of The Governance of Health Research Involving Human Subjects, Michael McDonald, Principal Investigator. Ottawa, ON: Law Reform Commission of Canada. http://www.lcc.gc.ca/en/themes/gr/hrish/macdonald/macdonald.pdf

Canada. 2000. Personal Information Protection and Electronic Documents Act. S.C. 2000, c.5. http://laws.justice.gc.ca/en/P-8.6/

Canadian Charter of Rights and Freedoms. 1982. Part I of the Constitution Act, 1982, being Schedule B of the Canada Act 1982 (U.K.), c.11.

Canadian Institutes of Health Research. 2001. Recommendations for the Interpretation and Application of the Personal Information Protection and Electronic Documents Act (S.C. 2000, c.5) in the Health Research Context. http://www.cihr-isc.gc.ca/e/193.html

Canadian Medical Association. 1998. Health Privacy Code. CMA Journal 159:997.

Canadian Standards Association. 1996. Model Code for the Protection of Personal Information: A National Standard of Canada. CAN/CSA-Q830-96. Mississauga, ON: Canadian Standards Association.

Caulfield, Timothy, Ross E. G. Upshur and Abdallah Daar. 2003. “DNA databanks and consent: A suggested policy option involving an authorization model.” BMC Medical Ethics, 4: 1.

Chadwick, Ruth. 1999. “The Icelandic Database: Do Modern Times Need Modern Sagas?” BMJ 319: 441–44.

Commission on the Future of Health Care in Canada (Roy Romanow). 2002. Building on Values: The Future of Health Care in Canada – Final Report. http://www.hc-sc.gc.ca/english/pdf/romanow/pdfs/HCC_Final_Report.pdf

Council of Standards of Australia. 1995. Australian Standard: Personal Privacy Protection in Health Care Information Systems (AS 4400-1995). Homebush, NSW: Standards Association of Australia.

Cragg Ross Dawson [Polling Firm]. 2000. Public Perceptions of the Collection of Human Biological Samples. London, UK: Wellcome Trust/Medical Research Council. http://www.wellcome.ac.uk/en/images/biolcoll_3182.pdf

Dickens, Bernard. 2000. “Legal Issues.” Section C of The Governance of Health Research Involving Human Subjects, Michael McDonald, Principal Investigator. Ottawa, ON: Law Reform Commission of Canada. http://www.lcc.gc.ca/en/themes/gr/hrish/macdonald/macdonald.pdf

Eiseman, Elisa. 2000. “Stored Tissue Samples: An Inventory of Sources in the United States.” In Research Involving Human Biological Materials: Ethical Issues and Policy Guidance. Vol. 2, Commissioned Papers. Rockville, MD: National Bioethics Advisory Commission. http://www.georgetown.edu/research/nrcbl/nbac/hbmII.pdf

Erikson, Stefan. 2001. “Informed Consent and Biobanks: In the Interests of Efficiency and Integrity.” In The Use of Human Biobanks: Ethical, Social, Economical and Legal Aspects, edited by Mats Hanson. Uppsala, Sweden: Uppsala Universitet. http://www.bioethics.uu.se/biobanks-report.html

Ess, Charles, and the Association of Internet Researchers (AoIR) Ethics Working Committee. 2000. Ethical Decision Making and Internet Research: Recommendations from the AoIR Ethics Working Committee. Approved by AoIR membership 11/27/02. http://www.aoir.org/reports/ethics.pdf

Estonian Gene Project. 2001. Gene Donor Consent Form. Annex 1, Regulation 125, Dec. 17, 2001, Minister of Social Affairs. http://www.geenivaramu.ee/index.php?lang=eng&sub=74

Estonian Gene Project. 2003. Information about the Gene Donor Consent Form. http://www.geenivaramu.ee/index.php?lang=eng&sub=75

Etzioni, Amitai. 1999. The Limits of Privacy. New York, NY: Basic Books.

Everett, Margaret. 2003. “The Social Life of Genes: Privacy, Property and the New Genetics.” Social Science and Medicine 56: 53–65.

Faden, Ruth R. and Tom L. Beauchamp in collaboration with Nancy M. P. King. 1986. A History and Theory of Informed Consent. New York, NY: Oxford University Press.

Freedman, Benjamin. 1975. “A Moral Theory of Consent.” Hastings Center Report 5: 4.

Freeman, Phyllis and Anthony Robbins. 1999. “The U.S. Health Data Privacy Debate: Will There Be Comprehension Before Closure?” International Journal of Technology Assessment in Health Care 15: 316–30.

Fried, Charles. 1984. “Privacy: A Moral Analysis.” In Philosophical Dimensions of Privacy: An Anthology, edited by F. D. Schoeman. Cambridge, UK: Cambridge University Press.

Garfinkel, Simson and Deborah Russell. 2000. Database Nation: The Death of Privacy in the 21st Century. Cambridge, UK: O’Reilly and Associates.

Gavison, Ruth. 1984. “Privacy and the Limits of the Law.” In Philosophical Dimensions of Privacy: An Anthology, edited by F. D. Schoeman. Cambridge, UK: Cambridge University Press.

Geneforum. 2001. The Geneforum Leadership Survey on Genetic Privacy: Executive Summary. http://www.geneforum.org/learnmore/resources/fowlerg_andersonb_200104.pdf

Geneforum. 2003. http://www.geneforum.org

GeneWatch. 2002. Human Genetics — Parliamentary Briefings. http://www.genewatch.org/HumanGen/Publications/MP_Briefs.htm#MP_3

GeneWatch. 2003. http://www.genewatch.org

Graham, John. 2003. “Strengthening Democracy in Canada: Reinventing the Town Hall Meeting.” Ottawa, ON: Institute On Governance. http://www.iog.ca/publications/TownHall2.pdf

Gulcher, J. and K. Stefansson. 2000. “The Icelandic Healthcare Database and Informed Consent.” The New England Journal of Medicine 342: 1827–30.

Hanson, Mats. 2001. “In the Interests of Efficiency and Integrity.” In The Use of Human Biobanks: Ethical, Social, Economical and Legal Aspects, edited by Mats Hanson. Uppsala, Sweden: Uppsala Universitet. http://www.bioethics.uu.se/Biobanks-report.html

Harry, Debra, Stephanie Howard and Brett Lee Shelton. 2000. Indigenous Peoples, Genes and Genetics: What Indigenous Peoples Should Know About Biocolonialism. Wadsworth, NV: Indigenous People’s Council on Biocolonialism. http://www.ipcb.org/pdf_files/ipgg.pdf

Hodge, James G., Jr. 2003. “Health Information and Privacy.” Journal of Law, Medicine and Ethics 31: 663–71.

Interagency Advisory Panel on Research Ethics (PRE). 2002. Process and Principles for Developing a Canadian Governance System for the Ethical Conduct of Research Involving Humans. http://www.pre.ethics.gc.ca/english/publicationsandreports/publicationsandreports/positionpaper.cfm

Johnson, Deborah. 2001. Computer Ethics. 3rd ed. Princeton, NJ: Prentice-Hall.

Kaiser, Jocelyn. 2002. “Population Databases Boom, From Iceland to the U.S.” Science 298: 1158–61.

Knoppers, B. M., M. Hirtle, S. Lormeau, C. M. Laberge and M. Laflamme. 1998. “Control of DNA Samples and Information.” Genomics 50: 385–401.

Lowrance, William. 2002. Learning From Experience: Privacy and the Secondary Use of Data in Health Research. London, UK: The Nuffield Trust.

Mannverend: The Association of Icelanders for Ethics in Science and Medicine. 2003. http://www.mannvernd.is/english/home.html

McDonald, Michael. 2000. “Ethics and Governance.” Section A of The Governance of Health Research Involving Human Subjects, Michael McDonald, Principal Investigator. Ottawa, ON: Law Reform Commission of Canada. http://www.lcc.gc.ca/en/themes/gr/hrish/macdonald/macdonald.pdf

McInnis, M. 1999. “The Assent of a Nation: Genethics and Iceland.” Clinical Genetics 55: 234–38.

McLean, Deckle. 1995. Privacy and Its Invasion. Westport, CT: Praeger.

Medical Research Council of Canada, Natural Sciences and Engineering Research Council of Canada, and Social Sciences and Humanities Research Council of Canada. 1998. Tri-Council Policy Statement: Ethical Conduct for Research Involving Humans. http://www.pre.ethics.gc.ca/english/policystatement/policystatement.cfm

Merz, Jon F., Glenn E. McGee and Pamela Sankar. 2004. “Iceland Inc.? On the Ethics of Commercial Population Genetics.” Social Science and Medicine 58: 1201–09.

Moor, James. 1997. “Toward a Theory of Privacy in the Information Age.” Computers and Society 28:14–21.

Moor, James. 1999. “Using Genetic Information While Protecting the Privacy of the Soul.” Ethics and Information Technology 1: 257–63.

Naser, Curtis. 1997. “High Speed Genetic Testing Technology and the Computerization of the Medical Record.” http://www.math.luc.edu/ethics97/papers/Naser.html

Naser, Curtis and Sherri Alpert. 2000. Protecting the Privacy of Medical Records: An Ethical Analysis. Portland, ME: National Coalition for Patient Rights. http://www.nationalcpr.org. (Link not active as of April 15, 2003.)

Network of Applied Genetic Medicine. 2000. Statement of Principles: Human Genome Research. Version 2000. http://www.rmga.qc.ca/default.htm

Network of Applied Genetic Medicine. 2003. Statement of Principles on the Ethical Conduct of Human Genetic Research Involving Populations. Version 2003. http://www.rmga.qc.ca/default.htm

Nissenbaum, Helen. 1998. “Protecting Privacy in an Information Age: The Problem of Privacy in Public.” Law and Philosophy 17: 559–96.

Olafsson, Sveinn. 2002. “Information Policy Disputes in Iceland.” International Information and Library Review 34: 79–95.

Pomfret, John and Deborah Nelson. 2000. “In Rural China: A Genetic Mother-lode.” The Washington Post, Dec. 20, p. A01.

Powers, Madison. 2002. “Privacy and the Control of Genetic Information.” In Ethical Issues in Biotechnology, edited by Richard Sherlock and John Morley. Oxford, UK: Rowman and Littlefield.

Quebec. Commission de l’éthique de la science et de la technologie. 2003. The Ethical Issues of Genetic Databases: Towards Democratic and Responsible Regulation (Position Statement: Summary and Recommendations). Quebec, QC: Commission de l’éthique de la science et de la technologie. http://www.ethique.gouv.qc.ca/eng/ftp/Avis10-02-03.pdf

Radwanski, George (then Privacy Commissioner of Canada). 2002. “Privacy in Health Research: Sharing Perspectives and Paving the Way.” Web site of the Privacy Commissioner of Canada, Nov. 14. http://www.privcom.gc.ca/speech/02_05_a_021114_e.asp

Regan, Priscilla. 1995. Legislating Privacy: Technology, Social Values and Public Policy. Chapel Hill, NC: The University of North Carolina Press.

Regan, Priscilla. 2003. “The Role of Consent in Information Privacy Protection.” In Considering Consumer Privacy: A Resource for Policy Makers and Practitioners, edited by Paula Bruening. Washington, DC: Center for Democracy and Technology. http://www.cdt.org/privacy/ccp/consentchoice2.shtml/pdf

Robertson, John. 1999. “Privacy Issues in Second Stage Genomics.” Jurimetrics Journal 40: 59–76.

Rolleston, Francis. 2002. Final Report: Scoping a Gene Bank Inventory. Ottawa, ON: Industry Canada, Contract 5002843.

Rosen, Christine. 2003. “Liberty, Privacy and DNA Databases.” The New Atlantis: A Journal of Technology and Society (Spring). http://www.thenewatlantis.com/archive/1/rosen.htm

Rosen, Jeffrey. 2000. The Unwanted Gaze: The Destruction of Privacy in America. New York, NY: Random House.

Ruebhausen, Oscar M. and Orville G. Brim, Jr. 1966. “Privacy and Behavioural Research.” American Psychologist 21: 423–28.

Seltzer, W. 1998. “Population Statistics, the Holocaust and the Nuremberg Trials.” Population and Development Review 24: 511–52.

Stacey, Kristina. 2001. Giving Your Genes to Biobank UK: Questions to Ask. Report for Genewatch UK. http://www.genewatch.org/HumanGen/Publications/Reports/BioRport.pdf

Sweeney, Latanya. 1998. Presentation on Re-identification of De-identified Medical Data. Washington, DC: National Committee on Vital and Health Statistics, Subcommittee on Privacy and Confidentiality. http://ncvhs.hhs.gov/980128tr.htm

Sweeney, Latanya. 2001. “Information Explosion.” In Confidentiality, Disclosure and Data Access: Theory and Practical Applications for Statistical Agencies, edited by P. Doyle, J. Lane, J. Theeuwes and L. Zayatz. Washington, DC: Urban Institute, in conjunction with the U.S. Bureau of the Census.

Sykes, Charles. 1999. The End of Privacy. New York, NY: St. Martin’s Press.

Uehling, Mark. 2003. “Decoding Estonia.” Bio-IT World, Feb. 10. http://www.bio-itworld.com/archive/021003/decoding.html

UNESCO. 1997. Universal Declaration on the Human Genome and Human Rights. http://portal.unesco.org/en/ev.php@URL_ID=13177&URL_DO=DO_TOPIC&URL_SECTION=201.html

UK Biobank. 2002. Draft Protocol for the UK Biobank: A Study of Genes, Environment and Health. http://www.ukbiobank.ac.uk/documents/draft_protocol.pdf

United Kingdom. House of Commons. Select Committee on Science and Technology. 2003. The Work of the Medical Research Council: Third Report of Session 2002–2003. London, UK: The Stationery Office Limited. http://www.publications.parliament.uk/pa/cm200203/cmselect/cmsctech/132/132.pdf

United States. Department of Health, Education and Welfare. Secretary’s Advisory Committee on Automated Personal Data Systems, Records, Computers and the Rights of Citizens. 1973. The Code of Fair Information Practices. Washington, DC: DHEW.

United States. General Accounting Office. 1999. Medical Records Privacy: Access Needed for Health Research, but Oversight of Privacy Protections is Limited. GAO/HEHS 99-55, B-280657. Washington, DC: GAO.

Verhoef, Marja, Raymond Lewkonjia and Douglas Kinsella. 1996. “Ethical Implications of Current Practices in Human DNA Banking in Canada.” In Legal Rights and Human Genetic Material, edited by Bartha Maria Knoppers, Timothy Caulfield and T. Douglas Kinsella. Toronto, ON: Emond Montgomery Publications Ltd.

Warren, Samuel D. and Louis D. Brandeis. 2001. “The Right to Privacy.” In Today’s Moral Issues, 4th ed., edited by Daniel Bonevac, pp. 274–83. Boston, MA: McGraw-Hill. Originally published 1890, 4 Harvard Law Review 193.

Weijer, Charles. 1999. “Protecting Communities in Research: Philosophical and Pragmatic Challenges.” Cambridge Quarterly of Healthcare Ethics 8: 501–13.

Weir, Robert. 2000. “The Ongoing Debate about Stored Tissue, Samples, Research and Informed Consent.” In Research Involving Human Biological Materials: Ethical Issues and Policy Guidance. Vol. 2, Commissioned Papers. Rockville, MD: National Advisory Commission. http://www.georgetown.edu/research/nrcbl/nbac/hbmII.pdf

Weir, Robert and T. Horton. 1995. “DNA Banking and Informed Consent. Part 2.” IRB: A Review of Human Subjects Research 17: 1–8.

Westin, Alan. 1984. “The Origin of Modern Claims to Privacy.” In Philosophical Dimensions of Privacy: An Anthology, edited by Ferdinand Schoeman. Cambridge, UK: Cambridge University Press.

Whitaker, Reg. 1999. The End of Privacy: How Total Surveillance is Becoming a Reality. New York, NY: New Press.

Yeo, Michael. 1996. “The Ethics of Public Participation.” In Efficiency Versus Equality: Health Reform in Canada, edited by Michael Stingl and Donna Wilson. Halifax, NS: Fernwood Publishing.

Yeo, Michael. 2003. “Romanow’s Prescription Could Be a Privacy Nightmare.” Ottawa Citizen.

Yeo, Michael and Andy Brooks. 2003. “The Moral Framework of Confidentiality and the Electronic Panopticon.” In Confidential Relationships: Psychoanalytic, Ethical and Legal Contexts, edited by Christine Koggel, Allannah Furlong and Charles Levin. Amsterdam, Netherlands: Rodopi Press.


1 Although key works in the relevant literature and policy documents on privacy, biobanking, and research were reviewed in preparing this paper and are discussed throughout, it is not intended as a literature review. For an outline and overview of key policy issues concerning biobanking, see Anderlik and Rothstein (2001), Alpert (2000) or Weir (2000).
2 I do not intend this discussion to be a thorough or systematic review of the extensive literature on privacy. Alpert (2000) has a succinct, useful overview of this literature (see esp. pp. A6–12).
3 Nissenbaum (1998), for example, discusses how information technologies give rise to the issue of “privacy in public places,” which issue was not recognized by earlier accounts of privacy because public places were not pervaded by collection and monitoring technologies as they are today. Nissenbaum argues in effect that the concept of privacy needs to be expanded in the face of novel technologies (i.e., to comprehend the problem of “privacy in public”).
4 I use the Code of Fair Information Principles (1973) as my main reference point because it marks the beginning of the evolution of fair information principles. It contains the following five principles: “(1) There must be no personal data record-keeping systems whose very existence is secret; (2) There must be a way for a person to find out what information about the person is in a record and how it is used; (3) There must be a way for a person to prevent information about the person that was obtained for one purpose from being used or made available for other purposes without the person’s consent; (4) There must be a way for a person to correct or amend a record of identifiable information about the person; (5) Any organization creating, maintaining, using or disseminating records of identifiable personal data must assure the reliability of the data for their intended use and must take precautions to prevent misuses of the data.”
5 The failure to acknowledge (or the tendency to cover over) this tension is common and typically occurs in tandem with a confusion about privacy that this paper is largely concerned to explicate. An example here should help to clarify this point and why it matters for how privacy issues are framed, debated and resolved. In framing the tension between privacy and the public health need for access to personal information, Hodge (2003, p. 663) describes the privacy side of the tension as having to do with “concerns” relating “to misuses or wrongful disclosures of sensitive health data that can lead to discrimination and stigmatization against individuals.” Note that Hodge equates privacy with, or reduces it to, “misuse” or “wrongful disclosure.” However, he uses these terms not in relation to consent (e.g., use without consent is “misuse”; disclosure without consent “wrongful disclosure”) but rather as relates to the potential for adverse consequences for the data subject (“discrimination and stigmatization”). With privacy thus represented, it is no wonder that privacy and public health access only “seem at odds” and that moreover “[p]rotecting privacy is . . . essential to protecting the public’s health” (p. 663). For Hodge, “protecting privacy” is reduced to non-maleficence, or ensuring that data subjects will not be harmed in consequence of access to their data. Assurances to this effect are also necessary in that the failure “to respect the sensitivity and privacy of a person’s health information predictably leads individuals to avoid, or limit their participation in, public health programs, human subjects research and clinical care” (p. 663). The situation appears quite different, and the tension between privacy and access more pronounced, if one holds self-determination privacy in view instead of security privacy (which is how Hodge represents privacy).
6 In the pursuit of knowledge, researchers rightly seek to link information from diverse sources to gain as complete a picture of the matter at hand as is possible and relevant to their interest. Researchers appreciate well that the significance of a given piece of information may change considerably when that information is linked to other pieces of information and a composite picture is thereby formed. For similar reasons, it is important to situate biobank research in the context of the vast array of persons and organizations seeking access to information in contemporary society.
7 Increasing anxiety about privacy is reflected in the apocalyptic titles of four recent books published at the turn of this century: The Unwanted Gaze: The Destruction of Privacy in America (Rosen, 2000); Database Nation: The Death of Privacy in the 21st Century (Garfinkel and Russel, 2000); The End of Privacy (Sykes 1999); and The End of Privacy: How Total Surveillance is Becoming a Reality (Whitaker 1999). These books contain numerous examples of emerging threats to privacy.
8 I make no claims here about the intensity, prevalence and distribution of these concerns and worries among individuals, groups and communities. Surveying and polling various publics about privacy has emerged as a growth industry in recent years, and there is an extensive literature on the subject. Suffice to say that some people appear not to be very concerned about these things, others are more or less concerned, and some are concerned because others appear not to be as concerned as they should be. For my purposes, it is enough to enumerate these worries and concerns, the ethical significance of which cannot be reduced to public opinion in any event.
9 For example, there are also law enforcement biobanks. The recent evolution of these biobanks would make an interesting study. The most controversial issues pertain to the statutory requirement to provide DNA for the creation of a DNA profile. How this requirement is limited (e.g., required of which categories of offenders for which types of offences) varies from jurisdiction to jurisdiction. As short as the recent history of these biobanks has been, one can discern that change in the limitations on this mandated requirement is moving step-by-step along a trajectory of increased permissiveness. James Watson has recently proposed that DNA samples should be collected from everyone (in Europe at least). Canada’s National DNA Bank was enabled by legislation proclaimed in 2000 and is quite strictly limited, although there is lobbying to loosen the limitations. Slippery slope arguments here would appear to have considerable empirical support. The slide of the Virginia database for stored DNA is an interesting case in point. Rosen (2003) relates: “The first Virginia database stored DNA samples only from convicted sex offenders, but within a year, the law had expanded to require DNA samples from all adult felons. Juveniles over the age of fourteen who committed serious crimes were added in 1996, and beginning in January 2003, any person arrested for a violent felony or burglary must give the state their DNA.”
10 Johnson (2001) identifies the following ways in which computer and information technology has changed record-keeping (albeit generically and not with reference to health care): “(1) It has made a new scale of information gathering possible; (2) It has made new kinds of information possible, especially transaction generated information; (3) It has made a new scale of information distribution and exchange possible; (4) The effect of erroneous information can be magnified; (5) Information about events in one’s life may endure much longer than ever before. These five changes make the case for the claim that the world we live in is more like a panopticon than ever before” (p. 117). I would add that, because of the sorts of things Johnson lists, computerization has enhanced the value of the information, and thus increased demand. It has also increased the potential for the secondary use of information to occur without our knowledge or consent.
11 In the case of information initially collected under the trust of the therapeutic relationship and its strong traditions of confidentiality for the primary purpose of receiving care, the privacy issue is even more pronounced, and the potential for loss of trust a significant concern. Of course, the computerization of my health information can also enhance the ability of my health providers to provide direct care to me. However, this purpose is quite distinct from the computerization of my health information for secondary purposes. Not infrequently, this distinction is collapsed in argument such that the primary purpose fronts for secondary purposes that may be less compelling and publicly acceptable. See Yeo (2003).
12 A requirement to obtain consent, or even to ensure knowledge, constitutes an impediment to secondary use. Predictably, secondary users seek to exempt themselves from a consent requirement and, where a consent requirement exists at the point of original collection, demand that the requisite consent be sufficiently flexible to authorize them to have access for their purpose. The lobbying in connection with the Personal Information Protection and Electronic Documents Act (PIPEDA) illustrates this very well. Researchers, research organizations and organizations championing the research interest were most concerned about the provisions for consent and lobbied to exempt research (or health care more broadly) from PIPEDA’s reach, or, in the alternative, to ensure that the consent provisions would not impede research. Having failed to win amendments, the lobbying shifted to the interpretation and application of PIPEDA, particularly as concerns consent. This lobbying orientation is evident in the Canadian Institutes of Health Research (2001) Recommendations for the Interpretation and Application of the Personal Information Protection and Electronic Documents Act.
13 I use the triplet “desire, need or demand” to note what I take to be a matter of empirical fact. Increased demand for access to information is a factual matter in much the same way that demand for a particular product or service in the marketplace is. However, it is noteworthy that the term “demand” can also be used with a claimed normative force (for example, as when a police officer “demands” one’s licence). These two senses of demand are conflated when researchers, functioning politically as lobbyists for research access in public policy, represent their information desires or needs virtually as “demands” (i.e., claims asserted as a matter of right).
14 This is the working definition specified by the Canadian Biotechnology Advisory Committee for the purpose of this paper. These specimens and data, I would emphasize, come from individuals — flesh and blood human beings — and indeed the first aspect of this definition literally includes the flesh and blood of human beings.
15 A collection of samples gathered for research purposes is a research biobank. In some cases, these samples may be collected for a discrete research project; in others, as a resource for a broader research program (such as in population genetics) that may facilitate any number of discrete research projects involving many researchers (situated in different networks, with different institutional and organizational affiliations). Much research biobank collecting has been indirect and retrospective, pulling in samples from other biobanks, primarily clinical care biobanks, without the knowledge or consent of the persons from whom the samples came. In some cases, the samples have been de-identified or key-coded by the original biobanker prior to collection in the research biobank; in others not. The definition here includes both biobanks that have been instituted (or may be regulated) for the purpose of research under this understanding or agreement (research biobanks), and biobanks that were originally instituted for (and may be regulated for) other purposes, such as law enforcement or clinical care (clinical care biobanks). Although not all biobanks have been instituted for research, the research interest captures all biobanks: clinical care biobanks, law enforcement biobanks, etc. Conversely, the interest of other biobankers (e.g., law enforcement) likewise captures all biobanks, including research biobanks. It is not unreasonable to suppose that all research biobanks will eventually be accessible for law enforcement purposes.
16 It may be that the scope of biobanks thus defined is so broad as to be unwieldy. For example, if urine specimens collected in a physician’s office constitute a collection, it seems odd to call the office a biobank. In the interest of conceptual clarity, therefore, it may be useful to distinguish different types of collection and to define the term “biobanks” more restrictively; for example, by referencing such things as the circumstances or purposes under which the collection has come to exist and the temporal horizon of the collection. In any event, the broad and inclusive scope of the present definition has the advantage of circumscribing all information research may seek to capture, whether originally collected for purposes of research or not, which narrower definitions may fail to do. For example, Rolleston’s (2002, p. 1) definition specifically excludes samples or information which cannot “be linked by reasonable means with the individuals from which they were obtained.” Such a restrictive definition occludes a number of important questions and issues central to privacy and biobank research.
17 Research is becoming increasingly complex in ways that bear upon normative issues. Interpretive questions about the nature of research go beyond what constitutes research vis-à-vis different methods or approaches (e.g., differences between qualitative and quantitative research, or between observational research and randomized clinical trials). Boundaries are becoming increasingly blurred in the context of complex health information systems. What counts as research, what counts as research involving human subjects, who counts as a researcher and who or what counts as a research subject, all make a difference as to what rules do or should apply. Where does one draw a line between research and health surveillance or monitoring? At what point does quality assurance or even health system management end and research begin? What distinguishes a disease registry from a research database? How does one distinguish a patient from a research subject in the context of a health information network? At what point does the patient or the research subject vanish into bits or bytes or pieces or strands such that one no longer speaks of research involving human subjects? How do we differentiate clinical care and research in the context of drug utilization and feedback systems? At what point does a collection of information become a database, and at what point does a database become a business, or the activity of research a commercial activity? All of these questions have implications relevant to norms, including implications about which norms are brought to bear and whether certain norms are brought to bear at all.
The practice of research is changing. Increasingly, research employs information technologies and techniques like data mining and knowledge discovery in databases (KDD). Health services research, population health research and population genetics involve increasingly extensive and intensive collections of information, much of it indirect, from a variety of data sources. These sources span different regulatory regimes, and accountabilities are diffused.
The economic and social infrastructure for research is changing. It comprises an increasingly complex web of individuals and organizations from a variety of sectors, including industry, government, health care and academia. The interests, values and traditions of those who are invested in this infrastructure include the pursuit of knowledge, the advancement of medicine and health care, economic development and profit. Notwithstanding the diversity and even divergence of agendas, the various interests involved are interlinked and part of a phenomenon not reducible to the sum of its various parts. We know little about this phenomenon, and our ignorance by itself is reason for concern from the standpoint of transparency and accountability.
The lines between academic and industry research become more blurred in light of emerging funding arrangements and partnerships, as well as incentives and imperatives to commercialization and policy relevance. The meaning and goals of research itself become increasingly ambiguous. The traditional understanding of research as the disinterested and independent pursuit of generalizable knowledge is different from research understood as the acquisition and application of knowledge toward the achievement of some practical product, purpose or policy outcome of interest to funders or others.
Biobank research developments are in the vanguard of such changes. Initiatives are proceeding in Canada and elsewhere under a variety of funding and partnership arrangements, interlinked with health information systems, networks and databases. Increasingly, we are dealing not just with specific research projects — guided by this or that specific research question — but rather with the creation of large-scale databases that bring together a variety of interests. These databases resemble research projects less than they do research programs, or even organizations, institutions and, in some cases, businesses.
18 Powers (2002, p. 443) lists a variety of features of genetic information that, although not unique in his view, combine to illuminate a “larger set of concerns.” He writes: “What seems to magnify concerns about individual privacy is the increased potential for comprehensive, systematic and efficient collection of an abundance of medical information about a person.” Whether, or to what extent, genetic information is unique is a debatable point. I note that Powers does not give consideration to what I am calling the ontological status of genetic information, in light of which consideration one might reasonably form an opposite conclusion about its uniqueness.
19 This quote is from Harry et al. (2000, dedication page). Indigenous groups have been particularly vocal about genetics and research. “Many indigenous peoples regard their bodies, hair and blood as sacred elements and consider scientific research on these materials a violation of their cultural and ethical mandates. . . . Indigenous peoples have frequently expressed criticism of Western science for failing to consider the interrelatedness of holistic life systems, and for seeking to manipulate life forms using genetic technologies” (p. 20). It should be noted that these beliefs and values are by no means held only by indigenous peoples.
20 The distinction between self-determination privacy and security privacy is somewhat superficial as concerns these deeper philosophical questions. Nonetheless, I believe it is the best that is available to us, and not only useful but essential for bringing greater clarity and explicitness to contemporary discussion about privacy and biobank research.
21 In effect, the collection of specimens or data about person A is an indirect collection of information about persons B, C and D, or community E. These others may or may not agree with being thus indirectly revealed, even if A does.
22 An inventory of such adverse consequences (even if only a catalogue of reports in the media) would be useful for assessing risks, although whatever criteria it used for harm would be debatable. Rolleston (2002, pp. 16–17) outlines (but does not reference) several Canadian examples that, even if they do not concern harms as such, are reason for concern. Everett (2003, p. 55) chronicles a number of prominent cases in the U.S., mainly having to do with insurance or insurability.
23 The sordid history of genetics — its complicity, and that of the medical and research communities more broadly, with racism, fascism, and eugenics — illustrates the potential for genetic knowledge to be used for malign purposes and underscores the gravity of the harms that could occur. Biobank research appears to have little affinity with these ideologies, but their vestiges persist in broader society and reasonable people may disagree in forecasting what force these ideologies may have in the future. See Annas and Grodin (1992) for a succinct and pointed discussion of this history relevant to research and research ethics. With respect to how the Nazis used available information registries and repositories to identify persons of interest to their programs, see Seltzer (1998). I am not here suggesting anything about the likelihood of any such abuse in the future.
24 Consent is a very important way of exercising informational self-determination or control. However, it is not the only way. For example, informational self-determination also comprehends our ability to access our own information, which is not captured by consent. Moreover, other terms with less connotation of passivity, such as “choice,” may do an even better job of getting at the underlying value here.
25 For a synoptic review of how various policy statements about control of DNA samples and information compare, contrast and conflict, see Knoppers et al. (1998). The authors understate the differences among (and even within!) these various statements by referring to it as “heterogeneity” (p. 398). For a discussion of the U.S. policy context, see Weir (2000).
26 I interpret two recent inventory studies (Eiseman, 2000; Rolleston, 2002), neither of which squarely addresses this issue, to suggest that the practice is at least not uncommon. Unfortunately, Eiseman’s inventory of tissue sample repositories and banks does not clearly or synoptically indicate the conditions under which researchers may gain access to samples. Table 1 (Breast Cancer Specimen and Data Information System, pp. D10–11) lists 14 repositories to which cancer researchers have access. Under the heading “Limitations,” a variety of descriptors are listed, including: “must sign confidentiality statement,” “cannot be used to create a commercial product.” The descriptor “must document IRB approval for use of human subjects” appears once, which suggests that the remaining 13 repositories have no such requirement. No descriptor is given for consent, which I take to indicate that no such requirement exists. Rolleston’s inventory is likewise unspecific as concerns the issue at hand. However, statements like the following (which is specifically related to pathology samples) appear to indicate non-consensual research access: “The samples were generally collected within the clinical context, to which their consents (if any in older samples) were generally applied. Historically, such collections may have been accessed for research purposes with varying degrees of REB authorization and anonymization” (p. 12). The lack of explicitness about consent in these studies is itself an issue.
27 For a discussion of presumed consent and related issues in the Icelandic database, see Merz et al. (2004), Gulcher and Stefansson (2000), Annas (2000), Chadwick (1999) and McInnis (1999). Mannverend (The Association of Icelanders for Ethics in Science and Medicine) has been a very active campaigner against presumed consent. Olafsson (2002) has a good discussion of the broader policy dimensions, including political and economic, of the Icelandic database.
28 It is difficult to determine the precise extent of biobanking that is occurring in Canada and elsewhere. In part, this is because few efforts have been made to inventory biobanks, but also because what counts as a biobank depends upon definitions. The few inventories of biobanks (using different definitions of exactly what it is they are inventorying) do not provide a very thorough or systematic account of their operating rules and practices (e.g., concerning consent, governance, security standards). Eiseman’s (2000) inventory is probably the most comprehensive. Rolleston (2002) and, to a lesser extent, Verhoef et al. (1996) give at least some indication of the scope of the phenomenon in Canada.
29 With genetic information, information about one sibling can be very revealing about another sibling, the family or the community. To this extent, the information is also about, and perhaps in some sense even belongs to, the sibling, family or community, and not just the individual member. At any rate, the rights and interests of these others must be considered along with the rights and interests of the individual member. The issue is not limited to genetic information. Persons with HIV may be concerned about how information extracted from their group may be used contrary to their interests or values. Aboriginal people may be concerned that information gleaned from their community — even if accurate — may be used to perpetuate stereotypes that are hurtful to the community. For a sustained discussion of issues and challenges for research ethics (albeit not focussed on group privacy as such) in recognizing groups, collectives or communities, see Annas (2001), Burgess and Brunger (2000) and Weijer (1999).
30 The case is different yet again where both my consent and the agreement of some other or others is required, absent whose agreement my consent is not sufficient. This case may arise when my siblings are implicated in information about me and claim a moral right or interest in the information sufficient to override my decision to allow research access to it. This is not, in fact, a proxy situation at all, since the other or others are not in any sense representing me, imperfectly or otherwise.
31 If individual rights, and provisions to ensure them, are inadequate to address novel challenges posed by population-based research, this does not mean they are not necessary. It is one thing to say that the consent of the individual must be supplemented by that of the community; it is something altogether different, and does not follow at all, to say that the community’s consent obviates the need for individual consent. Bernard Dickens (2000), in a discussion of individual versus democratic consent, writes: “This is not non-consensual research, even though individuals subjected to it have not given their individual consent. It is based on consent, given by a democratically constituted legislature” (p. 1000). This way of speaking is problematic and potentially misleading. If the concept of consent can be used in this way to warrant calling research without individual consent (or indeed, even with individual objection or refusal) “consensual,” this certainly does not mean that consent can be imputed to the individuals concerned or that this research is any less non-consensual from the individual standpoint. Nor does it obviate the need for consent.
32 It is telling that Rolleston’s 2002 study was commissioned by Industry Canada and not Health Canada.
33 Here and elsewhere in describing the values of research beyond those focussed on the protection of human subjects (rights and welfare; self-determination and non-maleficence), I give prominence to public good beneficence, recognizing that, historically, research has had a strong commitment to the pursuit of truth or generalized knowledge, without regard for the consequences (or at least without expressing the argument for research in these terms). I believe that in recent years, and in step with an increased utilitarianism in society and in research funding agencies in particular, research has come to justify itself more and more in terms of utility, and less and less in terms of pursuit of truth or knowledge for its own sake. I cannot elaborate on this here, but I do believe it to be a matter of tremendous significance that has not been sufficiently noted and discussed.
34 A number of recent documents address these larger governance and accountability issues that go beyond research ethics as such to incorporate broader concerns about democracy. For example, see Stacey (2001), GeneWatch (2002) and the Québec Commission de l’éthique de la science et de la technologie (2003).
35 Whether there exists a right to privacy makes a great deal of difference as concerns how we adjudicate issues where privacy is in conflict with some other good. It is the generic status of privacy as a moral claim that I wish to emphasize here, which claim has all the more force if privacy is specified (and acknowledged) as a right.
36 However, this does not mean that privacy is merely an individual as opposed to a social good. Individual rights are at the same time social goods. Whatever else the good society may be — and visions of the good are many — it is a society in which the rights of individuals and respect for persons are taken seriously. Consent at the level of the individual, where it is an ingredient part of respect for persons, is a microcosm of consent of the governed at the societal level. For a theory that elaborates privacy as a social good as opposed to a solely individual right or good, see Regan (1995).
37 Their statement is as relevant today as it was 40 years ago, and has as much relevance for biobank research as it does for behavioural research.
38 A typical example of confidentiality defined along these lines: “the characteristic of data and information being disclosed only to authorized persons, entities and processes at authorized times and in the authorized manner” (Council of Standards of Australia, 1995, p. 7). The repetition of “authorize” three times in this definition might appear reassuring but should not distract us from noting that it is not the individual whose “confidential” data is in question that is doing this “authorizing.”
39 This sense of “violation” is captured in a survey on privacy and genetic violation conducted by Geneforum (2001). In asking respondents to rate the seriousness of genetic violations, it gives as an example of a violation “deliberately keeping tissue without informed consent” (p. 6). Using a seriousness of crime scale, respondents assessed the seriousness of this violation on par with “stealing two diamond rings worth approximately $5 000 from a small jewelry store” (p. 8). Deliberately releasing information about identity was judged about as serious as “wounding someone with a blunt instrument such that they required hospitalization” (p. 8).
40 However, to the extent that harms or risks (and not just consent) are also at issue, there are other reasons to care, and the more so the more seriously interests may be adversely affected by the loss of privacy. Robertson elaborates: “At the same time, it is worse if the secret voyeur also takes photos and discloses them to others who then inform the person photographed, or if the person with unauthorized access to private information uses it to deny a person a job or life insurance. Privacy is violated in either case, but in the latter situation, the violation also harms the person consequentially” (p. 64).
41 This question is all the more important given that the best regime we could honestly and realistically hope for would fall short of this certainty by some debatable number of degrees. The impossibility of reducing risks to zero adds to the argument for consent on the grounds that persons ought to be able to decide for themselves whether they want to accept those risks.
42 It is much the same point that is missed when the expression of privacy concerns about access is met with the rhetorical question, “If you have nothing to hide, what are you worried about?” One may or may not have something to hide, or something to worry about, but that is quite beside the point if one’s concern is with the imposition itself, and not adverse consequences arising from it.
43 The same point could also be made in connection with the notion of “abuse” or “misuse” of information as expressed in various data protection norms (e.g., “. . . prevent misuses of data”; Code of Fair Information Practices, 1973). What counts as a “use” or “abuse” is determined relative to what has been “authorized”; “misuse” or “abuse” equals “unauthorized.” On this account, there is no “misuse” or “abuse” provided the data is used as authorized, even if this authorizing has been done by someone other than the individual whose data are in question.
44 It is important to note that, throughout, I am analyzing issues as they arise under the rubric of privacy, and consent issues as they can be construed as related to privacy. However, consent has value and links with autonomy independent of any connection with privacy. Therefore, even when consent does not register in terms of privacy (e.g., when security privacy is under consideration), it may nonetheless register independently in relation to autonomy (which, I believe, is the primary reason it receives as much attention and care as it does in the research biobanking literature, and not its links with privacy).
45 Some such perspective, however inadequately I have been able to describe it, is widely held among policy makers, custodians of various databases or repositories, researchers and others (for whom the information in question is seen primarily as a resource for purposes believed to be benign). This perspective can be discerned in the literature on health research but is especially prevalent in policy statements and pronouncements. I provide a few examples in what follows to illustrate this perspective, but I appreciate that this does not count as evidence for its pervasiveness.
46 Naser and Curtis (2000, pp. 39–40) elaborate these broad values perspectives in relation to consent requirements and exemptions in U.S. federal regulations. Their elaboration of the deontological perspective in research ethics (pp. 37–48) is especially good. Ess (2000, addendum pp. 2, 21–22) discusses “the contrast between utilitarian and deontological approaches as reflected in contrasts between U.S. and European laws regarding privacy and consumer protection.”
47 They credit this distinction to Brewster Smith.
48 The idea of public good purposes is highly problematic. Some writers on privacy, such as Etzioni (1997), make much of the distinction between business or industry, which is focussed on profit maximization, and public bodies such as governments or research institutions, which are presumably focussed on public good purposes. This distinction becomes increasingly untenable given the emerging infrastructure (funding, partnerships, commercialization, etc.) for research. At any rate, I think it is highly debatable that, as Etzioni supposes, government poses a lesser threat to privacy than does the private sector.
49 However, because Lowrance does not make his values orientation explicit, the discernment of this perspective requires careful interpretation. Lowrance was appointed to chair the Interim Advisory Group of the UK Biobank in Feb. 2003.
50 The Romanow Report (Commission on the Future of Health Care) is an interesting example of such confusion. For a brief discussion of this report, see Yeo, 2003.
51 The Privacy Commissioner’s statements are all the more remarkable because in the very same speech he professes to adhere to a more traditional, self-determination and indeed rights-based definition of privacy: “I define privacy as the right to control access to one’s person and to information about oneself. And nowhere is that fundamental human right, that innate human need, the right of privacy, more important than with regard to personal health information — information about the state of our own bodies and minds.” That even someone in the position of Privacy Commissioner (whom one might presume should know better) can confuse the privacy issue this way underscores the importance of being vigilant about the distinction I am making.
52 Given the potential for confusion here, I want to ensure that my point is not misconstrued. I am not criticizing Radwanski for subordinating privacy (as self-determination) to public good beneficence. Which value should prevail in the case of research access to health records is a value judgment, and one that should be subject to public debate. My point rather is that Radwanski effectively obfuscates that there is a value judgment here — de facto makes it in favour of access, while at the same time suppressing the fact that he is making a value judgment. He conceals the negation of privacy by using language confusedly. He thereby preempts public debate because, on his account, there is no issue to debate!
53 Along these lines, Rolleston’s (2002, p. 27) suggestion that the best practices approach he favours should be led by CIHR is additional reason for concern, given CIHR’s understandable prejudice in favour of research access. This is not a criticism of CIHR, any more than pointing out that a particular individual would be in a conflict of interest were he or she to take on a certain role would be a criticism of that individual. It is the conflict situation itself that is the problem, and not the person or the institution as such.
54 There is a vast literature on this subject. Graham (2003) has a discussion of this subject in the Canadian context and reviews some of the literature.
55 Moreover, in most cases researchers have no interest in “me” except as a data source. Their interest rather is in a “virtual” me: a data subject constructed from various pieces of information that issue from me. They have no need to link this data subject to me, except insofar as this may be necessary to link various pieces of data to the same data subject, and acquire new information from or about me in the future. Even here the research interest in me as a data source is quite different from the interest of law enforcement, for example, which has an interest in me of an altogether different kind.
It is possible to completely sever the link between the data subject and me, in which case it is, in principle at least, impossible to identify me from the data subject. However, there are two main reasons why this may not be desirable. The first is that the data subject at any given point in time may not be sufficiently complete for the research interest (e.g., longitudinal research). If I am needed as an ongoing source of data, there must be some way for someone (although it does not have to be the researcher) to link the ongoing me to the data subject. The second is that some reason may emerge to contact me, for example, to inform me about some medical discovery that has been made in which I have an interest.
56 Arnason (2002) has a comprehensive discussion of conceptual and technical issues regarding purported anonymity in the Health Sector Database in Iceland with respect to key-coded data and so-called “one-way encryption.” He argues that as long as a key exists (however guarded and encrypted that key may be) to link new information to old information on individuals, the information is not, as claimed, anonymous. Ross Anderson (1999), perhaps one of the world’s foremost health sector computer security experts, comes to much the same conclusion in a scathing critique of the contemplated security provisions for the Health Sector Database. He finds other even more significant faults with its security provisions, including those linked to governance.
57 The sufficiency of consent may also be questioned from the standpoint of considerations other than non-maleficence. There may be broader societal or community interests at stake, and impacts of research beyond that on privacy. Certain arrangements concerning such things as patents and property may not be acceptable, independent of whether subjects would be willing to agree to them or would potentially be harmed by them.
58 I expect that Caulfield, Upshur and Daar (2003) are correct in interpreting the legal doctrine of informed consent as narrowly as they do. They are certainly to be commended for facing squarely the tension between consent and research. However, when I speak of informed consent I have in mind the doctrine insofar as it is grounded in respect for persons and self-determination. The legal doctrine may or may not be the most perfect way to express consent thus grounded. In any event, their authorization model is an interesting model for consent, provided it is interpreted and implemented in such a way as to truly give expression to respect for persons and not to make an end run around it to ensure access to desired data.
59 The philosophical (or moral) issue here cannot be settled by public opinion polls. The conclusions of a recent survey by Geneforum (2001, p. 5) on attitudes to broad (blanket) consent are nonetheless interesting, even if predictable: “There is a sharp division of attitudes regarding this form of consent. About half prefer blanket informed consent and half prefer to consent to each specific use of DNA samples separately.” Of course, we do not need a survey to tell us that people have different decision styles and preferences. Some prefer to know a great deal, and others are content to have the broad outline. The difference in individual preferences argues in favour of disclosure models tailored as much as possible to individual differences. One way to accomplish this is with layered consents (whatever minimal elements it is deemed necessary to include in the first layer), allowing those who prefer to penetrate deeper into details the opportunity to do so. The values of consent, transparency and accountability are in synergy here since, if the data steward is committed to transparency and accountability, a great deal of detailed information about the research project or database (policies, partnerships, and even financial statements and annual reports) should be available to be researched by anyone who wishes to do so.
60 However, there may be other reasons for concern about broad consent beside ones having to do with self-determination (e.g., protection of the subject from harm).
61 It can be useful to distinguish consent from choice. Choice pertains to the range of options I have available to me. Alpert (2000, p. A-30) proposes a graduated consent, with a spectrum of options as concerns both subsequent use and the degree of identifiability/anonymity. This seems to me a good idea. However, more choices do not necessarily mean that autonomy is enhanced, and multiplying choices does not necessarily improve the consent.
62 I believe that the much debated consent for the National Health and Nutrition Examination Survey (NHANES III), which created a database containing blood samples from 8 500 persons combined with survey information and other population health information, was not specific enough to authorize the newly contemplated research purpose. The relevant item from the consent form reads: “A small sample of your blood will be kept in long-term storage for future testing” (Weir, 2000, p. F-4). The problem is that testing and research are quite heterogeneous purposes that, to a lay person at least, cannot be bridged by the notion of consistent use. This example also illustrates the potential for the conflating of clinical purposes (what may happen to my information or samples for my benefit) and research purposes (what may happen to my information for reasons that may have nothing to do with the provision of care to me). Clinical and research purposes sometimes coincide (in which case, different issues arise); when they do not, they are radically heterogeneous.
63 Pomfret and Nelson (2000) report on a collection project in China undertaken by Harvard researchers in which local inhabitants were enticed to provide blood samples by the promise of free medical care, which they could not otherwise access. To the extent their receipt of health care was tied to their “consent” to provide blood samples, this consent is of dubious legitimacy.
64 See Freedman (1975) for a discussion of this issue. Fair information principles furnish a quite different framework for addressing the issue of bundling and tied consent. Such practices come under the scrutiny of a collection or purpose limitation principle. In PIPEDA (2001), the privacy protecting virtues of this principle are enhanced by the addition of a “reasonable purpose” test.
65 The relationship between liberal theory and the tradition of medical confidentiality is discussed at greater length in Yeo and Brooks (2003).
66 This is the justificatory test research is required to meet under PIPEDA (Canada, 2000, para. 7(2)(c) and 7(3)(f)) to access and use information without consent. The relevant parts of PIPEDA speak of “collection” and “disclosure” rather than access, but this latter term is less awkward for present purposes.
67 Nor does the CIHR document include a test for what counts as “vitally important research.” Moreover, as this qualifying phrase does not appear in the recommended regulations, if these recommendations were adopted, research would not need to be found “vitally important” to meet the “impracticability” test, satisfaction of which is sufficient (for CIHR) to justify access to personal information without consent.
68 To be sure, the CIHR recommendations (see Recommendation 3) do reference a requirement for review by a research ethics board. However, this requirement does little to remove the concern I raise here.
69 The Canadian Medical Association’s Health Information Privacy Code (1998) recognizes the importance of a principled justification, not only out of respect for rights and democracy, but also in consideration of the importance of professional confidentiality. It advances a test for non-consensual collection, use and access that incorporates the Section 1 test in the Canadian Charter of Rights and Freedoms (1982), as well as principles of professional ethics. Sections 3.5 to 3.7 of the Code are particularly noteworthy and contain provisions that are considerably more rigorous and respectful of privacy than those typically brought to bear in research. Section 3.6.b, in addition to, or building on, other provisions (e.g., for privacy impact assessments, etc.), makes the following additional specifications, inspired by the Charter, to ensure proper scrutiny of the issue of consent exceptions:

When non-consensual collection, use, disclosure or access is permitted or required by legislation or regulation that meets the requirements of this Code, the following conditions must also be met:
  1. the right of privacy has to be violated because the purpose(s) could not be met adequately if patient consent is required; and
  2. the importance of the purpose(s) must be demonstrated to justify the infringement of the patient’s right of privacy in a free and democratic society.
70 To be fair, Rolleston also notes current initiatives under way to improve the system and expresses confidence that they can do so sufficiently that significant change is not necessary.
71 Lowrance (2003, p. 61) writes that “many leaders of health and research institutions are simply not aware of all the database activities in their domains, in part because databases tend to ‘just grow’ from informal beginnings and gradually expand their purposes.”
72 In their study of the regulation of biotechnology in Canada, Boucher et al. (2002, p. 16) point to the importance of ensuring attention not only to protocols but also to development and implementation since what is actually built may be different in ethically significant ways from what is contemplated in outline. This is likely to be the case with large-scale biobanks. The inability of ethics review to ensure oversight and monitoring is also an issue.
    Created: 2005-07-13
Updated: 2005-08-04