Chapter 1
Overview of Technology, Security, Privacy and the Law

A. Security and information technologies: general comments
B. Privacy and information technologies: general comments
C. Law and technology: general comments

This is the report of the Information Technology Security Strategy (ITSS) Legal Issues Working Group.1 The Legal Issues Working Group is an inter-departmental group, although the large majority of members are lawyers with the Department of Justice. The report is a collaborative effort, with many individuals contributing various portions of the report. Our mandate is to canvass the legal issues relating to the Information Technology Security Strategy and to suggest legislative options for addressing legal problems with implementing new information technologies. The full mandates of the Working Group and the ITSS Steering Committee are provided in Annex A.

The IT Security Strategy and the Blueprint for Information Technologies

The mandate of the Legal Issues Working Group comes from the government's Information Technology Security Strategy. In a nutshell, the strategy is to find ways for the federal government to have in place the most appropriate secure technologies at the lowest possible price. The strategy can be achieved if the government receives bulk discounts by all departments buying the same products. It can also be achieved if the government buys the lowest priced technologies which meet its security needs. This means buying products intended for a market larger than the federal government, preferably a market aimed at the entire private sector and other governments. If the needed technologies do not exist, then the strategy calls for the government to minimize its costs of developing the products by entering into partnerships with the private sector. The private sector's incentive for investing in the project is to be able to resell the newly developed products to other markets.

If this strategy is implemented effectively, government will save money and improve efficiency in other ways: if government departments can join together to buy the same or technically compatible products, then there will be interoperability between agencies. If government buys products that are acceptable to other markets, it is very likely that the products will meet international standards for quality and security. This should reduce the risks of product failure or unusually quick obsolescence. Finally, successful implementation of this strategy will mean the government will be able to share information across departments to a far larger extent than occurs today, with the promise of both cost-savings and better service for the public.

The Information Technology Security Strategy Steering Committee decided to focus on three technologies in particular for implementing electronic security: encryption (including digital signatures and public keys); firewalls and gateways; and advanced card technologies. At the end of this report is a Glossary of these and other technical terms.
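Because digital signatures and public keys recur throughout this report (see in particular Chapter 10 and the Glossary), a toy numeric illustration may help readers who have not encountered them before. The sketch below is not drawn from any ITSS or departmental document; the key values are deliberately tiny textbook numbers, and real systems use keys hundreds of digits long together with standardized padding. It shows only the core idea: something transformed with a private key can be checked by anyone who holds the matching public values.

    # Toy illustration (NOT secure) of a public-key digital signature.
    # The key values are tiny textbook numbers chosen so the arithmetic
    # is visible; they are assumptions made for this example only.
    import hashlib

    p, q = 61, 53
    n = p * q        # public modulus (3233), published
    e = 17           # public exponent, published
    d = 413          # private exponent, kept secret (e*d = 1 mod lcm(p-1, q-1))

    message = b"Payment of $500 is authorized"

    # The signer condenses the message with a hash and transforms the
    # result with the private exponent.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    signature = pow(digest, d, n)

    # Anyone holding only the public values (n, e) can check the signature;
    # altering a single character of the message makes the check fail.
    assert pow(signature, e, n) == digest

The point of the sketch is simply that signing and verifying use different, mathematically linked keys; the legal significance of that verification is taken up in Chapter 10.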
The IT Security Strategy exists because of the government's desire to make greater use of information technologies. This direction is set out in the government's Blueprint for Renewing Government Services using Information Technology. The Blueprint says that to the greatest extent possible, the government will:
· capture its information electronically, once-only, and share it on a need-to-know basis;
· deliver its services to citizens through a "single window" free of functional and organizational barriers;
· use the Internet and joint ventures to communicate information with citizens, with other levels of government and with the private sector;
· conduct its transactions electronically (e.g., tenders, contract awards and payments; tax returns and refunds; grant and license applications and approvals; personnel competitions and benefits; delegations of authority; etc.).

The federal government is extremely active in developing policies relating to electronic information. The government is truly in the information business, and virtually every federal department plays a major role in the use of technology, the collection of information or the development of information standards and policies. For a quick overview of some of the many inter-departmental committees and working groups examining information policies, information laws and information technologies, see the charts in Annex B.

Because no electronic system can be perfectly secure and because electronic information is relatively easy to copy, alter, destroy or corrupt, moving towards greater use of electronic information appears to increase security risks. This increased risk is justified by referring to other objectives such as better and more cost-effective services. The IT Security Strategy is an attempt to keep the new security risks to a minimum.

A. Security and information technologies: general comments

It is important to note that the Government Security Policy does not (and cannot be expected to) answer all questions with respect to information technologies. The Government Security Policy focuses on the security of information. It does not examine questions such as the long-term or broad impacts of technology on the national interest, it does not analyze the various kinds and sources of threats to the security of information, and it does not define the "national interest." On the issue of the security of information, the Security Policy's focus is on how to ensure information is kept confidential,2 although the policy also makes reference to the importance of accuracy and integrity of information. A recent survey of Chief Information Officers reported that data integrity is the most important security objective and that data availability is likely to increase in importance as a security objective in the future.3 Problems such as the long-term preservation of electronic information and the need to ensure that published government information is accurate are given much less emphasis than securing confidential information. In addition, on some important issues, such as how to monitor information security and what to do in case of breaches of security, the policy requires departments to develop their own policies. There is no government-wide approach to these fundamental issues.

It is important to note that reasonable people can disagree on the extent of risk and the types of security that are required for different kinds of information. Risk assessment is a fundamental and inherently subjective part of securing information. A number of factors must be considered. The Security Policy does not provide guidance on which of the following factors should be given more weight than others (a simple illustration of how some of them can be combined follows the list):
· cost of the proposed security vs. cost of not having security
· how often a particular kind of breach of security actually occurs
· whether a breach causes demonstrable harm
· the dollar value of the damage caused by a particular kind of breach
· the non-monetary value of the damage (what value does one assign to the unauthorized disclosure of personal information, to the national interest, to the loss of customer or public confidence?)
· difficulty in detecting certain kinds of breaches
· difficulty of preventing certain kinds of breaches
· potential for catastrophic harm with remote chances of occurring compared to potential for lesser harm with greater probability of occurring
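As a concrete, hedged illustration of how some of these factors interact, the sketch below combines the frequency of a breach with the dollar value of the damage into an expected annual loss and compares it with the cost of the proposed safeguard. All of the figures are invented for the example; assigning a number to the non-monetary harms listed above is precisely the kind of subjective judgment the Security Policy leaves to departments.

    # Illustrative only: expected-loss comparison for a proposed safeguard.
    # Every figure below is an assumption invented for this sketch.

    def expected_annual_loss(breaches_per_year, cost_per_breach):
        return breaches_per_year * cost_per_breach

    safeguard_cost = 50000   # annual cost of the proposed security
    loss_without = expected_annual_loss(breaches_per_year=2.0, cost_per_breach=40000)
    loss_with = expected_annual_loss(breaches_per_year=0.1, cost_per_breach=40000)

    if (loss_without - loss_with) > safeguard_cost:
        print("The safeguard pays for itself on expected monetary loss alone.")
    else:
        print("Any justification must rest on the non-monetary factors above.")

Even this simple arithmetic shows why reasonable people can disagree: changing the assumed frequency of a breach, or the value placed on non-monetary harm, can reverse the conclusion.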
In conducting risk assessments and making choices about what security levels and technological safeguards to assign to certain types of information, it is worth remembering that experts and non-experts may have legitimate differences of opinion.4 There is a risk that focusing too much on keeping information confidential, as the most important focus of government security, will result in failing to identify or devote appropriate attention to other security issues raised by new technologies.

Main threats to the security of electronic information

There are a variety of different kinds of threats to the security of electronic information. It is important to be aware of the various kinds and sources of the threats. Again, there is a risk of focusing too much on some threats, such as computer hackers and disgruntled employees, without giving appropriate attention to other threats, such as the problem of preserving electronic information over the long term. Below are general comments on four categories of security threats.

The nature of the technology

The nature of electronic information is that anyone with access to it can delete or alter the information, in ways that are undetectable, with the touch of a button. Just as easily, a person can "create" an electronic document that is communicated but never retained. Electronic records can be lost even if they are initially saved, by deleting them to create more disk space and by giving them file names only the originator knows. The documents may be filed on personal computer hard drives or on diskettes, located in a labyrinth of directories, and may even require knowledge of passwords before they can be opened. Finding records can be an important security issue.

Recently, an Ontario Hydro nuclear power plant near Toronto needed to find a record of a crucial reactor sealing ring that had suddenly begun wearing out several years earlier than expected. To obtain some replacement rings fast, the plant's managers and scientists had to know immediately who sold them the parts, when they had been ordered, and whether the contractor provided guarantees against defects. ... as the records manager and her staff searched frantically for the stray document, she discovered that the problem was now a chronic one. Despite management directives that all employees print out paper versions of electronic documents and place them on file, the volume of paper records arriving at the central storage office had dropped 50 percent within six months of the network's installation.5

More importantly, computers can alter and delete information with no human intervention. "It is only slightly facetious to say that digital information lasts forever — or five years, whichever comes first."6 Information technologies rapidly become obsolete.
[F]uture access depends on an unbroken chain of such migrations from one technology to another frequent enough to prevent media from becoming physically unreadable or obsolete before they are copied. A single break in this chain renders digital information inaccessible, short of heroic effort. ... [When a migration, or translation, occurs] not only does translation lose information, it also makes it impossible to determine what information has been lost, because the original is discarded.7

A good example of how these concerns translate to current government realities is the Ontario Ministry of Consumer and Commercial Relations. This department is responsible for receiving documents and issuing birth, death and marriage certificates, change of name documents and records of other key life events. In 1991, the Ministry made optical images of approximately 10 million records, and has added another 4 million since then. Ontario passed new laws permitting the use of optical images, certified copies and electronic filing of information. Yet the Ministry chose to retain all of its original paper documents. The Ministry is now preparing for its first migration: converting all of its optical images to a new technology, probably in 1997, less than a decade later. It has not yet selected the next technology. The Ministry has not yet decided whether or how it can migrate to a paperless environment, although its Manager of Information Systems believes this will happen over the next decade.8

Different options have been considered for the problem of preserving electronic information over the long term, but none are perfectly satisfactory. The essential problem is that going paperless means ensuring future computers have access to the same instructions telling a computer how to read an otherwise unintelligible string of 0s and 1s. Generally, those instructions are themselves a series of 0s and 1s. "To be readable for posterity, these specifications must be saved in a digital form independent of any particular software, to prevent having to emulate one system to read the specifications of another."9

Not only do programs become obsolete; the physical properties of the technology are subject to erosion over time. Some experts believe digital magnetic tapes should be copied once a year to guarantee that information is not lost.10

Magnetic tapes, floppy disks and hard disks — made by gluing thin layers of magnetic metal oxides to plastic surfaces — can deteriorate when the plastic shrinks or expands, the adhesive degenerates or the magnetic particles are disrupted and lose their precise orientation. An optical disk such as a CD ... can come apart or corrode if moisture gets between the layers.11

The National Archives and Records Administration in Washington, D.C. reports that magnetic tapes containing data received from government departments were unreadable after 15 years, in part because departments do not follow archival conservation standards relating to temperature and humidity of storage, and rewinding or copying tapes regularly. Old tapes become brittle, while new tape drives spin the tape 10 times faster than earlier models. Perhaps the most spectacular example of a government agency losing its electronic memory recently occurred at the National Aeronautics and Space Administration, when space scientists were eager to access some 1.2 million magnetic tapes of observations that NASA created during three decades of space flight.
The researchers were hoping to reveal "long-term trends like global climate change, tropical deforestation, and the thinning of the ozone layer," according to NASA, as well as new nuggets of information about the moon and planets. But the information could not be read or sometimes even found. Tapes were uncatalogued. Some had been damaged by heat or floods. Many were unlabeled as to which mission or spacecraft or computer system created them. Because no proper archival controls for these records were in place, NASA officials estimate that it will take millions of dollars and years of detective work to link each of the files to their spacecraft or mission and decode the information so it can be read by hardware and software now in use.12

Another way computers can alter or delete information on their own is through malfunction due to the complexity of computer programs. It is inherent in computer programming that there will be "bugs," unintended program defects whose presence is very difficult to detect and whose operation can be entirely unpredictable. This is one of the reasons software manufacturers refuse to guarantee that their products actually work. Virtually all software comes with a disclaimer having the same meaning as this one, by Haventree Software Company about their EasyFlow program:

If EasyFlow doesn't work: tough. If you lose millions because EasyFlow messes up, it's you that's out the millions, not us. If you don't like this disclaimer: tough. We reserve the right to do the absolute minimum provided by the law, up to and including nothing. This is basically the same disclaimer that comes with all software packages, but ours is written in plain English and theirs is in legalese. We didn't want to include any disclaimer at all, but our lawyers insisted.

To give one example of how bugs affect computers: on July 1, 1991, the national capital of the U.S.A. was deprived of phone service. The cause was a single mistyped letter among ten million lines of computer programming code.13 Kevin Kelly, Executive Editor of Wired magazine, predicts that one-hundred-million line programs are not far away and that the future of computing is to move away from the current "serial" programming, where computers perform commands in sequence, to "parallel" programming, where computers perform commands simultaneously. "But parallel computers remain hard to manage. Parallel software is a tangled web of horizontal, simultaneous causes. You can't check such nonlinearity for flaws since it's all hidden corners."14

Charles Perrow, an expert at analyzing organizational systems, argues that at a certain point some systems become so complex and inter-connected that it is inevitable they will break down. Because these breakdowns are inevitable and cannot be prevented, he calls them "normal" accidents.15 Daniel Crevier, who wrote a history of the development of artificial intelligence, says "In large artificial systems, stability problems are the norm, rather than the exception."16

Failure of organizations to take appropriate security measures

Another serious threat to the security of electronic information is the failure of organizations to take appropriate security measures. This could be interpreted as a failure to comply with legal obligations to secure electronic information. In his 1990 Annual Report, the Auditor General found serious problems with the government's approach to information security:

Good security costs money. ...
management is exposing sensitive information to an unacceptable level of risk. ... departments and agencies have been negligent in not satisfying this need, and in failing to make an adequate commitment to threat and risk assessment. ... departments and agencies have not made a concerted effort to implement the policy and standards, with respect to information security. ... Deputy heads ... have not assigned the necessary priority to information security.17

The government responded to this report by undertaking a number of important steps, including a revision of the Security Policy to address important questions raised by the Auditor General, including, among other things, the division of security responsibilities within government. The Auditor General's follow-up report in 1994 recognized the improvement but found that "much remains to be done on periodic security activities that are more resource-intensive, such as formal security training and reviews and comprehensive threat and risk assessments."18

Examples of security errors that are easy to make include:
· providing access to information systems as a whole rather than limiting access to specific digital information on a need-to-know basis (Security Policy, Ch. 2-1, p. 37);
· not storing records of electronic mail that provide a record of the management and administration of a government institution (Treasury Board Access to Information Manual, Ch. 2-4, p. 11);
· not making back-up copies of digital documents;
· choosing short, easy to discover passwords (a minimal automated check is sketched below);
· leaving computers on unattended;
· installing modems that by-pass the security features on the central computers;
· in some cases, not deleting information from disks that will be reused;19 and
· failing to respect the Employee Privacy Code (an important consideration when auditing computer use: Treasury Board Privacy and Data Protection Manual, Ch. 3-3).

Threats to security caused by authorized users

A third threat to the security of electronic information comes from authorized users. The Director of Education of the U.S. National Computer Security Association says "Our data shows that 85 per cent of computer crime is committed by employees inside companies who are stealing money and data from their own employers."20 Authorized users have easy access to the system, detailed knowledge of how the system operates, and knowledge of what information is available on the system. They may have direct access to sensitive information as part of their jobs. They may feel justified in breaching information security policies for any number of reasons. Authorized users can threaten the security of electronic information in three ways:
· by deliberately not following policies (choosing simple passwords; not making back-up copies of documents; not filing electronic mail messages; sending highly sensitive information over unsecure networks; etc.);
· by accidental errors (inadvertent changes to or deletions of "original" electronic documents, using file names that effectively prevent electronic documents from being located at a later time, inadvertently disclosing sensitive information when disclosing non-sensitive information, etc.); and
· by intentional sabotage of information or information systems (particularly if the user is disgruntled by decisions of the organization, such as dismissing the employee or failing to promote the employee, or if the employee objects on moral or political grounds to a government decision, etc.).
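Several of the failures listed above turn on weak passwords. A minimal automated check, applied at the moment a password is chosen, can catch the most obvious cases. The sketch that follows is illustrative only: the length threshold and the short list of forbidden words are assumptions made for the example, not requirements of the Security Policy.

    # A minimal password rule of the kind a department might apply to avoid
    # "short, easy to discover passwords".  Thresholds and the word list
    # are assumptions made for this sketch, not government policy.
    COMMON_WORDS = {"password", "secret", "admin", "ottawa"}

    def acceptable(password):
        if len(password) < 8:                    # too short to resist guessing
            return False
        if password.lower() in COMMON_WORDS:     # trivially guessable
            return False
        kinds = [
            any(c.islower() for c in password),
            any(c.isupper() for c in password),
            any(c.isdigit() for c in password),
        ]
        return sum(kinds) >= 2                   # require some variety of characters

    assert not acceptable("secret")
    assert acceptable("Maple1996leaf")

Checks of this kind do not remove the human element discussed next, but they shrink the set of errors that depend purely on inattention.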
The best way to prevent authorized users from being security risks is by providing a work environment that induces loyalty and compliance with policies. As job security, promotions and salary increases are reduced, this is increasingly difficult to do. Further, as empowerment and technical expertise increase, authorized users are much better able to affect the security of electronic information. Other ways to prevent security breaches by authorized users include background checks when hiring, adequate computer and security training, and monitoring of the use of computer systems. No matter how much effort goes into such measures, it is inevitable that there will be security breaches. Charles Perrow comments that problems caused by humans generally do not result in serious damage, but as the technologies become more complex and the consequences more serious, there is more cause for concern:

Time and time again warnings are ignored, unnecessary risks taken, sloppy work done, deception and downright lying practiced. As an organizational theorist I am reasonably unshaken by this; it occurs in all organizations, and it is a part of the human condition. But when it comes to systems with radioactive, toxic or explosive materials, or those operating in an unforgiving, hostile environment in the air, at sea, or under the ground, these routine sins of organizations have very nonroutine consequences. Our ability to organize does not match the inherent hazards of some of our organized activities. Better organization will always help any endeavour. But the best is not good enough for some that we have decided to pursue. Nor can better technology always do the job.21

The threat of unauthorized users

Unauthorized users of government computer systems are a real threat, although computer hackers probably pose the least serious of the various security risks.22 Here is a conversation between author Bruce Sterling and Gail Thackeray, former Assistant Attorney General in Arizona and a leading expert in electronic crime:

What, in her expert opinion, are the worst forms of electronic crime, I ask, consulting my notes. Is it credit card fraud? Breaking into ATM bank machines? Phone-phreaking (i.e., manipulating the telephone system to gain free telephone service)? Computer intrusions? Software viruses? Access-code theft? Records tampering? Software piracy? Pornographic bulletin boards? Satellite TV piracy? Theft of cable service? It's a long list. By the time I reach the end of it I feel rather depressed. 'Oh no,' says Gail Thackeray ... 'the biggest damage is telephone fraud. Fake sweepstakes, fake charities. Boiler-room con operations. You could pay off the national debt with what these guys steal.'23

This conversation suggests that unauthorized users pose a rather small threat to the security of electronic information. Among the most serious of the concerns created by unauthorized users are computer viruses.
However, the editor of the Virus Bulletin says that although the number of viruses increased from 22 to approximately 5,000 between 1986 and 1993, they pose less of a threat now than they did in the 1980s because of awareness of their existence and software and other technical ways to protect against viruses.24 Nonetheless, as government computers are increasingly linked to public networks, and especially as electronic commerce increases, the opportunity and motive for unauthorized users to intercept and alter government information is increasing.25 "Computer security experts guess that over a million passwords were stolen in the first six months of 1994 alone."26 As governments begin to connect their computers to the Internet, to other departments, to other levels of government and to their citizens, it is inevitable that the security risks presented by unauthorized users will increase.

Information technologies and national security: broader issues

It is worth pausing, if only briefly, to consider the larger relationships between information technologies and national security. It is appropriate to do so in a Legal Issues report because one of the main effects of new technologies is to limit the ability of national governments to independently decide or enforce their laws. Here is a brief discussion of some of those broader issues.

Territorial sovereignty: controlling information entering and leaving the country; investigating offences that cross international boundaries

The threat to territorial sovereignty is caused by the instantaneous movement of information anywhere around the world, which means that governments can no longer control what information comes into or leaves a country, and frequently means that investigating offences and enforcing laws are made many times more difficult as a result of territorial jurisdiction problems.27 We know that new technologies are substantially reducing our ability to regulate broadcasting in our country, substantially reducing our ability to impose effective trial publication bans,28 and reducing our ability to control hate propaganda, pornography, unauthorized computer hacking and money laundering. The Internet has been described as "the largest functioning anarchy in the world."29 These problems are likely to escalate as money becomes more digitized and as the computers of more and more people, corporations and governments join the Internet.

A different kind of threat to territorial sovereignty and the economic security of national governments is world-wide information and currency flows. Countries can be put into crisis overnight, as recently happened in Mexico. "Unlike citizens, global investors can call a vote of confidence at any time."30 We are now getting used to the idea that so long as we rely on foreign money to finance our governments, our decision-making powers are sharply limited by the views of foreign investors.

Anonymous transactions: tax collection and law enforcement

Even more difficult than the problems of territorial sovereignty are the problems posed by anonymous transactions.
One such problem is that anonymous transactions, combined with world-wide banking, could make it far easier for citizens to avoid reporting their income and avoid paying taxes.31 A different problem is that law enforcement agencies may be limited in their ability to eavesdrop on suspected criminals, and authorities and legal disputants of all kinds will have greater difficulty collecting evidence (see the discussion above about the problems in preserving and finding electronic information).

Anonymous transactions are made possible by a variety of technologies. Encryption is becoming widely available: it is easy and free to download encryption programs off the Internet. Tim May, a retired Intel physicist, is part of a group of private persons with a deep interest in and understanding of cryptography. May published a warning on the Internet of the "specter of crypto anarchy" and he summarized Crypto Anarchy as having the following features: "encryption, digital money, anonymous networks, digital pseudonyms, zero knowledge, reputations [another way of describing a Public Key Infrastructure], information markets, black markets, collapse of government."32 "One thing for sure, this stuff nukes tax collection."33

Money is becoming anonymous. Digital money already exists in the form of cards that can hold credits that diminish as they are used (for example, credits to operate photocopy machines). Payment is made with no record of who made the payment. There may or may not be a record of a digital money card being purchased or replenished. Digital banks now exist, which have the potential to offer the kind of anonymous banking available at Swiss banks to anyone in the world, from the comfort of their own home. Digital money is now being developed for widespread use.

Communications are becoming anonymous. "Anonymous remailers" take e-mail messages that are sent to them, strip off the header information (equivalent to the return address on an envelope) and send the messages on to their destinations. Companies now offer anonymous telephone services by rerouting telephone calls, allowing you to make calls that cannot be traced, leaving no paper trail whatsoever. Monitoring telephone lines is becoming more difficult with the switch from copper to optical fibres and to digital transmissions.
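As a rough sketch of what an anonymous remailer does, the fragment below accepts an ordinary e-mail message, discards the identifying headers and forwards only the body to the destination named inside the message. The "X-Anon-To" header and the addresses are invented for the example; real remailers of the period had their own conventions.

    # Bare-bones sketch of an anonymous remailer: identifying headers are
    # simply not copied, so the forwarded message carries no return address.
    # The "X-Anon-To" header and the addresses are invented for this example.
    from email import message_from_string

    def strip_and_forward(raw_mail):
        original = message_from_string(raw_mail)
        destination = original["X-Anon-To"]   # where the sender asked it to go
        body = original.get_payload()
        # Sender, route and timestamp headers are discarded, not rewritten.
        return "To: %s\nFrom: nobody@remailer.example\n\n%s" % (destination, body)

    incoming = (
        "From: alice@somewhere.example\n"
        "To: remailer@remailer.example\n"
        "X-Anon-To: bob@elsewhere.example\n"
        "\n"
        "Meet at noon.\n"
    )
    print(strip_and_forward(incoming))   # the forwarded message never mentions alice

The simplicity of the sketch is the point: once the identifying headers are gone, nothing in the forwarded message links it back to the original sender.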
Technology and employment

Even more important than national security concerns with territorial sovereignty and anonymous transactions are concerns that technology increases national unemployment, resulting in lower tax revenues, higher social welfare payments and more socio-economic problems. We are long familiar with robots replacing factory workers. We are becoming used to word processors and spreadsheet programs replacing secretaries and clerical staff. We have now entered an era where electronic mail, bulletin boards and conferences, and management information systems are replacing managers.34 We may be approaching the era when expert systems begin replacing professionals.35

Some persons argue that losing jobs to new technologies is nothing new: what happens is that displaced workers enter new fields. The proof is the transition from an agricultural society to an industrial society, where huge numbers of farm jobs were lost, but workers were generally absorbed into the industrial economy. Now, manufacturing jobs are being absorbed into the service economy. While this may be true, it has also been argued that the movement from manufacturing to service economies has reduced the proportion of full-time, skilled jobs, and reduced the wages earned. We are now in what is called by some a "jobless recovery." At the very least, our society is becoming used to 10% unemployment as a "normal" state of affairs (a figure which does not count all those persons who may have given up looking for jobs or may otherwise be unavailable for jobs).

To summarize, new technologies carry promises of opportunity and danger. The opportunity is to provide more efficient and improved government services, to create more direct links between citizens and government, and to give citizens maximum freedom of expression and access to information. The dangers are that new technologies will restrict our ability to freely decide our policies and to enforce our laws, and could lead to higher levels of unemployment (although there is no readily apparent alternative).

Security of humans from our machines

An even more fundamental challenge of new technologies could be their threats to the security of humankind generally. Assessing that challenge requires looking into the future to see where the new technologies are headed. Forecasting the future is an inherently dangerous game, and an important cautionary note must be kept in mind: the promises of new technology miracles are usually oversold in the short term and underestimated in the long term.36 Nonetheless, any serious attempt at strategic planning, policy development or proactive expert advice should be done with an understanding of trends and anticipation of future policy pressures. There are a number of unmistakable trends in technology:
· People want to communicate, and the trend to connect virtually all computers to all other computers is in evidence everywhere we look. The exponential growth of the Internet is all the proof we need (the ubiquitous telephone provides historical proof).
· Universal access is highly probable, due to our natural desire to network, combined with the facts that computer prices drop over time, that computers consume relatively few natural resources, and that computers produce their greatest economic benefits with the maximum number of persons connected.
· Computer memory increases exponentially. Parallel processing is likely to take over from serial processing, enabling vastly more complex computer tasks. Many computer programs now require millions of lines of code.
· Encryption is becoming a public technology and science. Its use is essential for maintaining privacy and for conducting electronic commerce, so its use will likely spread to all computer users.
· Artificial intelligence, genetic engineering, and artificial evolution are in their infancy and growing up quickly, perhaps too quickly to be subject to meaningful legal limits or policy deliberation.

Daniel Crevier, a professor at the University of Quebec's School of Engineering, has written a history of artificial intelligence (AI). In his book, he peers into the future and sees three scenarios: Big Brother (which sees technology, not government, as Big Brother, threatening privacy and creating wide-spread unemployment); Colossus (which sees technology as exceeding our ability to control it, with catastrophic consequences)37 and the Blissful Scenario (which sees computers simplifying complex problems for general understanding, merging computers and human beings with a net benefit as a result).
After considering his three scenarios, Crevier concludes:

In the longer term, however, AI (artificial intelligence) remains immensely threatening. The machines will eventually excel us in intelligence, and it will become impossible for us to pull the plug on them. (It is already almost impossible: powering off the computers controlling our electronic transmission networks, for instance, would cause statewide blackouts.) Competitive pressures on the businesses making ever more intensive use of AI will compel them to entrust the machines with ever more power. ... proposals already exist for legal recognition of artificial intelligence programs as persons in order to solve the issues of responsibility posed by the use of expert systems. ... The unrelenting progress of AI forces us to ask the inevitable question: Are we creating the next species of intelligent life on earth? ... [The arrival of artificial intelligences] will threaten the very existence of human life as we know it. Whatever the outcome, we will have to radically re-examine our values and ask ourselves such questions as: Is intelligence what humanity is about? Whether it is or not, where do our loyalties belong — to humanity or to evolution? ... It is neither possible nor desirable to outlaw AI. We should not, however, expect the main battles of the twenty-first century to be fought over such issues as the environment, overpopulation or poverty. No, we should expect the fight to be about how we cope with the creations of our own human ingenuity; and the issue, whether we or they — our silicon challengers — control the future of the earth.38

Science writer Stephen Hall, another technology-friendly spirit, rejects as unrealistic and extremist the argument that scientific explorations are too dangerous to pursue, saying that such arguments "reflect fear rather than conviction, a contraction of human curiosity and a shrinking of our mission." Yet even Hall concludes by recalling:

... the hope articulated by Lewis Mumford in the 1930s that modern society would be able to benefit from modern technology and be able to minimize its dangers if it was managed and controlled by reform-oriented social scientists and enlightened civic leaders. Yet Mumford gave up on the idea by the 1960s as hopelessly naïve. Half a century later, for better and for worse, with a hole in the ozone layer and a burgeoning greenhouse effect and a war recently fought to defend our right to pour gasoline down a bottomless gullet of an earlier technological invention, it's clear that society has not quite developed the knack for contesting, much less controlling, the imperial nature with which economics and politics embrace, glorify and enshrine new technologies ...39

The attractiveness of new technologies is that they appeal to the human desire to control our surroundings in search of greater personal security. The paradox is that as our technologies become more complex, they slip further and further out of our control, and are doing serious damage to our surroundings. Jeremy Rifkin, an environmental advocate, argues that we must learn to find a new concept of security (or, more appropriately, relearn an old concept): that security comes not from controlling our surroundings, but from developing and appreciating our inter-dependence with our surroundings.40 Rifkin argues against a culture of privacy and urges caution with respect to the use of new technologies.
In a paper whose purpose is to explore the government's information technology security strategy, it is useful to note that there are many security issues relating to the use of new technology. Securing electronic information is only one of them.

B. Privacy and Information Technologies: general comments

A surveillance society

Information technologies have meant that infinitely more information is collected about us. None of us knows how many databases contain information about us or how many people have access to information about us. We know that all of our records are now or will soon be digital. This includes personal information in our medical, tax, banking, and credit card records, credit rating information, what we read, what Internet discussion groups we participate in or simply browse, who we talk to (on the telephone or computer), what kind of car we drive, our birth date, name, marital status, address, family members, how often and when we use our computers at work and what information we store in our computers, etc. The wholesale collection of this much personal information, with the potential for the various parts of this information being collected, combined and shared without our knowledge, could not occur without information technologies.41

Privacy laws and codes around the world try to provide security of personal information. For the federal Privacy Commissioner, this misses the point because it treats privacy as confidentiality: "Lost entirely is the concept of the right to be left alone, from being counted, surveyed, canvassed and monitored at will."42

The Canadian government is actively exploring "data warehousing" systems, whose objective is to pull existing information from across an organization into a single logical database that can provide profiles on citizens. One use of the data warehouse concept is to detect overpayment and fraud. The magazine Technology in Government recently reported that Australia has introduced a system designed by AT&T. Harry Zimmer, director of marketing for the AT&T Global Information System, says the system proposed for Canada will be very similar to Australia's: it "will build a citizen database that ... will understand everything about a person or a family, as to all the income that they get from all levels of government." According to Zimmer, the massive information collection will go well beyond the federal government to provincial authorities as well.43 The Privacy Commissioner has said:

Shared personal databases threaten becoming the single government computer file that privacy laws were enacted to prevent. They pose the threat of a national population database and with it the ominous possibility of a national identification card.44

Whatever privacy protection a specific new technology can offer, the cumulative effect of new technologies is overwhelmingly in the direction of eroding personal privacy. There are good reasons to believe that existing privacy principles and laws cannot ensure protection of personal information, and these problems will only become more severe when government moves to collecting all its information electronically, collecting any particular piece of personal information once-only and sharing that information with all other government institutions that have a need for it. Chapter 3 summarizes fundamental principles concerning the protection of personal information and Chapter 8 summarizes the law concerning the government's ability to search and seize computers. At this point, it is sufficient to say that existing privacy laws have not prevented the development of what many observers consider to be a surveillance society.
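The "data warehousing" approach described above can be pictured very simply: records held separately by different programs are linked through a shared identifier into a single profile, which is what makes both the promised detection of overpayment and fraud and the Privacy Commissioner's concerns possible. The files, field names and identifier below are invented for the illustration.

    # Bare-bones sketch of data-warehouse record linkage: separate program
    # files joined on a shared identifier to produce a citizen profile.
    # All file names, fields and figures are invented for this example.
    tax_file = {"123-456-789": {"reported_income": 41000}}
    benefits_file = {"123-456-789": {"benefit_paid": 9200, "declared_income": 18000}}

    def build_profile(citizen_id):
        profile = {"citizen_id": citizen_id}
        for source in (tax_file, benefits_file):
            profile.update(source.get(citizen_id, {}))
        return profile

    profile = build_profile("123-456-789")
    # The combined view reveals what neither file shows on its own, e.g.
    # income declared to the benefits program differing from income
    # reported for tax purposes.
    flag_for_review = profile["declared_income"] < profile["reported_income"]

The same linkage that flags a discrepancy also produces exactly the kind of consolidated personal record the Privacy Commissioner warns about.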
Secure technologies both protect and erode privacy

New technologies can threaten our privacy in new ways and can be used to provide some privacy enhancements. The government's Security Strategy is studying the technologies of encryption, firewalls and gateways, and smart cards in particular. Encryption is generally an effective way to keep personal information confidential. However, some persons are concerned about law enforcement access to encrypted messages (these concerns do not raise particularly new issues; the same concerns could be expressed about law enforcement powers to intercept telephone conversations, which have been well accepted for a long time).

Firewalls, gateways and smart cards protect privacy because they help ensure only those persons with a need to know particular information have access to that information. However, these technologies, by their very nature, threaten privacy because they are essentially monitoring technologies. They leave a trail of information about which individuals used which computer systems and accessed which databases at particular times. This trail can interfere with employees' rights of reasonable expectations of privacy in the workplace (and affect citizens using on-line services in the same way). Smart cards have the added feature of containing personal information directly on the cards. Lost or stolen cards could result in unauthorized, undetected breaches of security. Therefore, there is a trend to making smart cards as unique as possible by using biometric information that is completely unique to the individual: fingerprints, palm prints, scents, the shape of the retina and, perhaps, DNA. Smart cards based on biometrics raise the possibility of government- and employer-held databases of our most personal information.

The primary privacy challenges for the government's effort to increase the use of information technologies will likely be:
· defining who has control of personal information, particularly if information is held in shared databases (and keeping track of who has accessed the database);
· restricting access to personal information to cases where there is a genuine need to know;
· preserving personal information (especially if administrative decisions are made based on personal data that is frequently updated: preserving evidence of the data as it stood on the day the decision was made must be an important consideration); and
· ensuring that audits of government computer systems respect the government's Employee Privacy Code and employees' Charter rights to a reasonable expectation of privacy in the workplace, while at the same time ensuring that personal databases are monitored so that only authorized persons are accessing them.

A related issue could be reasonable expectations of privacy by citizens if and when citizens are able to have direct electronic access to their government-held personal information.

C. Law and Technology: general comments

Before discussing detailed legal issues concerning the law and new technologies, there are a number of general comments that need to be understood about how the law works generally.
· law is flexible. This means that sometimes the law can apply to a wide variety of circumstances. It is not necessary to have a new law for every new technology.
Courts have historically demonstrated a willingness to apply old legal principles to new technological circumstances.
· law is slow. The law generally reflects established social consensus and is enacted after the novelty of particular new technologies has worn off.
· law (like human language) is ambiguous and based on factual situations. This means it is not possible to give guarantees about how the courts will interpret competing arguments, or assess certain kinds of evidence, especially over the long term.
· legal advice is frequently influenced by policy considerations, because the law is flexible and ambiguous, and because of the overriding considerations of the Charter of Rights and Freedoms. For the government, deciding whether or not to defend particular courses of action in court involves both policy and legal considerations. Similarly, all statutory law reflects policy considerations of the government and Parliament.
· the constitution limits the ability of the federal government to change the law. The division of powers means that there are some subject matters where the federal government cannot legislate. For example, some persons argue the federal government is limited in the extent to which it can regulate the way private businesses deal with personal information. The federal government is also limited by the Charter of Rights and Freedoms, which provides guarantees of freedom of expression, freedom from unreasonable search and seizure, liberty and security of the person based on principles of fundamental justice, etc.
· what the law permits and what makes operational sense are not always the same thing. If you believe you will need to have certain information available to you over the long term, with 100% fidelity to the original, then your decision to go paperless will depend less on what the law permits and more on how confident you are in the technology.
· there is no magic to getting courts to accept electronic records into evidence. You simply must convince the courts that the records are reliable. (This issue is discussed in more detail in Chapter 9.)
· policies can create legal consequences. A failure to implement or comply with government policies can be used as evidence of negligence in an appropriate case. (This is discussed more in Chapter 5.) Statements made in government policies can create legitimate expectations which can be enforced in some cases. (This is discussed more in Chapter 6.)
Having discussed the broad security issues relating to information technology, this report now turns to examining the specific issues relating to:
· obligations to secure electronic information (Chapter 2 generally, Chapter 3 with respect to personal information and Chapter 4 with respect to commercial information);
· potential liability for unauthorized disclosure of confidential information (Chapter 5);
· potential liability for inaccurate government information and for having illegal information on government computer systems and bulletin boards (such as harassing, defamatory or obscene messages, or messages that infringe copyrights) (Chapter 6);
· criminal offences which can be relied upon in the event of misuse of information technology (Chapter 7);
· powers of government to monitor, intercept, search and seize electronic communications, information and computers (Chapter 8);
· rules concerning the admissibility of electronic records and signatures into evidence before a court (Chapter 9);
· specific legal issues relating to digital signatures, confidentiality encryption and a public key infrastructure (Chapter 10); and
· issues relating to procurement of secure technologies (Chapter 11).

It should be noted that although the first chapters in this report discuss record management issues, the last three chapters are focused squarely on issues relating to financial and commercial transactions. The security of electronic information is increasingly a financial issue, and as economic activity becomes more electronic, potential liability for poor security which causes financial damages increases accordingly.

ENDNOTES
_______________________________
1 The group is one of seven working groups established by the Information Technology Security Strategy Steering Committee, which was established by the federal government's Council for Administrative Renewal. The other six ITSS working groups are:
· accountability framework;
· advanced card technologies;
· confidentiality and privacy;
· electronic authorization and authentication (EAA);
· firewalls and gateways; and
· public (encryption) key infrastructure (PKI).
The purpose of the Steering Committee and the seven working groups is to help the federal government implement the Information Technology Security Strategy.
2 The Security Policy defines Information Technology Security as "the protection resulting from an integrated set of safeguards designed to ensure the confidentiality of information electronically transmitted; the integrity of the information and related processes; and the availability of systems and services." (Ch. 1-1, App. C, p. C-8)
3 "Tempting Fate: CIOs continue to downplay data security issues even as the threats rise," InformationWeek, Oct. 4, 1993. The survey was sent to 6,000 organizations of all different sizes. A total of 870 responses were received, 71% of which were completed by the head of information systems for the organization.
4 Charles Perrow discusses how experts and non-experts give different weights to some risk factors. He cites a study by Decision Research and members of a Clark University group that showed that experts have a more accurate understanding of actual harms caused by particular activities than non-experts. However, the study also found that non-experts assess risks based on the extent to which the risks taken are voluntary, understood and controllable.
Thus, even though highway accidents cause more fatalities than nuclear energy, non-experts viewed nuclear energy as posing a greater risk than driving, whereas experts were less likely to weigh the factors this way. This is called the "dread risk factor."

Dread and the unknown, uncontrollable aspects were recognized by the experts, but not thought relevant in judging riskiness. But not so for the public. In fact, for the lay groups, one could predict almost exactly their assessment of risk, based upon their assessment of how much dread was involved in the activity and the likelihood of a mishap being fatal. The ratings of dread and lethality also closely predicted their estimates of the number of fatalities that could be expected in a bad year. But this was not true of the experts. The degree to which they felt a risk was a "dread risk" and likely to be fatal did not influence their judgment of the overall risk. Apparently, to the experts a death is a death, whether from scuba diving or irradiation.

Normal Accidents: Living with High-Risk Technologies, Charles Perrow, Basic Books, 1984, pp. 325-326
5 "It's 10 O'clock: Do you know where your data are?" Terry Cook, Technology Review, January 1995, p. 49
6 "Ensuring the Longevity of Digital Documents," Jeff Rothenberg, Scientific American, January 1995, p. 42
7 "Ensuring the Longevity of Digital Documents," Jeff Rothenberg, Scientific American, January 1995, p. 45
8 Based on a telephone conversation with Walter Bilyk, Manager of Information Systems, Office of the Registrar General, Ontario Ministry of Consumer and Commercial Relations, January 28, 1995.
9 "Ensuring the Longevity of Digital Documents," Jeff Rothenberg, Scientific American, January 1995, p. 47
10 "Ensuring the Longevity of Digital Documents," Jeff Rothenberg, Scientific American, January 1995, p. 45
11 "Preservation of past poses problem," Mary Gooderham, Globe and Mail, Oct. 24, 1994, pp. A1, A7
12 "It's 10 O'clock: Do you know where your data are?" Terry Cook, Technology Review, January 1995, p. 50
13 The Hacker Crackdown: Law and Disorder on the Electronic Frontier, Bruce Sterling, New York: Bantam Books, 1992, p. 37
14 Out of Control: The Rise of Neo-Biological Civilization, Kevin Kelly, Addison-Wesley Publishing Co., 1994, p. 308
15 Normal Accidents: Living with High-Risk Technologies, Charles Perrow, Basic Books, 1984
16 AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier, New York: Basic Books, 1993, p. 317
17 Auditor General Annual Report 1990, pp. 236, 232, 235, 240, 242
18 Auditor General Annual Report 1994, vol. 1, pp. 2-8 - 2-9
19 When files are deleted, they are frequently relocated to a "garbage bin" on the computer's disk, inaccessible through normal programs but retrievable through special software. Until that part of the disk is overwritten, the "deleted" information remains intact.
20 "Sloppy security costs governments, firms $250 billion a year, expert says," John Davidson, Ottawa Citizen, Sept. 16, 1993
21 Normal Accidents: Living with High-Risk Technologies, Charles Perrow, Basic Books, 1984, pp. 10-11
22 For articles confirming the low rank to be given to computer hackers as security threats, see "Network Disaster Recovery," Jeffrey Owen, Datapro Reports on Information Security, June 1993, IS35-130-101; "Human Factors in Computer Security," Jennifer Hallinan, Journal of Law and Information Science, Vol. 4 No. 1, 1993, p.
94; "Tempting Fate: CIOs continue to downplay data security issues even as the threats rise," InformationWeek, Oct. 4, 1993; "Sloppy security costs governments, firms $250 billion a year, expert says," John Davidson, Ottawa Citizen, Sept. 16, 1993
23 The Hacker Crackdown: Law and Disorder on the Electronic Frontier, Bruce Sterling, New York: Bantam Books, 1992, p. 170
24 "Computer Virus Survey," National Security Institute's Advisory, Dec. 1994, p. 8. Also, see "Tempting Fate: CIOs continue to downplay data security issues even as the threats rise," InformationWeek, Oct. 4, 1993. Forty-six per cent of respondents said they encountered malicious programs or viruses in that year, but of those, only 10% reported financial or operational losses. Twenty-six per cent said the programs had no effect at all and 64% said they were disruptive but caused no financial loss.
25 For a good article that explains how computer systems can be vulnerable to unauthorized users and how to secure the computer systems, see "Internet Security: Unsafe at Any Node?", Salvatore Salamone, Data Communications, Sept. 1993, p. 61
26 "Network Confidential", Joe Flower, New Scientist, October 1994, p. 27
27 For more discussion on the impacts of new technologies on national sovereignty, see The Twilight of Sovereignty: How the Information Revolution is Transforming our World, Walter Wriston (former Chairman of Citicorp), New York: Scribner's Sons, 1992; and Powershift, Alvin Toffler, Toronto: Bantam Books, 1991
28 Dagenais v. CBC, Supreme Court of Canada, Dec. 8, 1994, p. 43, per Lamer C.J., where the Chief Justice, in a decision about trial publication bans, cautioned that judges should consider whether their rulings are likely to achieve their desired effect and noted that the efficacy of bans has been diminished by new technologies: "In this global electronic age, meaningfully restricting the flow of information is becoming increasingly difficult."
29 Out of Control: The Rise of Neo-Biological Civilization, Kevin Kelly, Addison-Wesley Publishing Co., 1994, p. 464
30 2020 Vision, Stan Davis and Bill Davidson, New York: Simon and Schuster, 1991, p. 177
31 Joe Flower, "Network Confidential", New Scientist, October 1994, p. 30: "... it does not take a great leap of imagination to envisage some people trying to form an alternative economy, unrestricted by governments, tax laws or accountants."
32 Out of Control: The Rise of Neo-Biological Civilization, Kevin Kelly, Addison-Wesley Publishing Co., 1994, p. 205
33 Out of Control: The Rise of Neo-Biological Civilization, Kevin Kelly, Addison-Wesley Publishing Co., 1994, p. 209
34 For more discussion on the effect of new technologies on managers, see "The Twilight of Hierarchy: Speculations on the Global Information Society," Public Administration Review (January/February 1985), p. 185
35 AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier, New York: Basic Books, 1993, p. 325
36 AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier, New York: Basic Books, 1993, p. 7
37 "We are already at a point in the standoff between machine judgment and human judgment where it sometimes takes heroic or even pathological chutzpah to say 'Well, I know better than the computer.' And this is a long way before we've got intentions really built into computers." AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier, New York: Basic Books, 1993, p. 316.
Crevier notes that AI departments and laboratories owe their birth and existence to the U.S. Defence Advanced Research Projects Agency (DARPA). DARPA also gave birth to the Internet. Crevier notes that "Some observers have even commented that the computerization of society is but a side effect of the computerization of warfare." (p. 313) Crevier is not alone in his security concerns about the use of new technologies. In its January 1995 edition, Wired magazine, a technology-friendly magazine, published a list of ten technological developments that truly cause its editors concern: terrorism by nuclear bomb, plutonium in a major city's water supply or electro-magnetic pulse device (paralyzing computer networks); "sandlot" nuclear war (e.g. Pakistan and India) producing global consequences; over-the-counter eugenics and accidents at biotechnology laboratories studying deadly viruses; currency collapse due to a computer virus; cancer caused by electro-magnetic emissions; among others. See also Out of Control: The Rise of Neo-Biological Civilization, Kevin Kelly, Addison-Wesley Publishing Co., 1994. Even for technology-friendly persons, such as artificial life researcher Chris Langton, the "blissful scenario" of artificial life carries enormous risks. "By the middle of this century, mankind had acquired the power to extinguish life. By the end of the century, he will be able to create it. Of the two, it is hard to say which places the larger burden of responsibilities on our shoulders." As quoted in Out of Control at p. 351
38 AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier, New York: Basic Books, 1993, p. 341
39 Mapping the Next Millennium: The Discovery of New Geographies, Stephen Hall, New York: Random House, 1992, p. 401
40 Biosphere Politics: A Cultural Odyssey from the Middle Ages to the New Age, Jeremy Rifkin, San Francisco: Harper Collins, 1992
41 For an excellent description of how personal information is combined, who is combining it and why, see Privacy for Sale: How Computerization has made everyone's private life an open secret, Jeffrey Rothfeder, New York: Simon and Schuster, 1992. Note also this comment from Kevin Kelly in Out of Control: The Rise of Neo-Biological Civilization, Addison-Wesley Publishing Co., 1994, p. 440: "Every fact that can be digitized is. Every measurement of collective human activity that can be ported over a network, is. Every trace of an individual's life that can be transmuted into a number and sent over a wire, is. This wired planet becomes a torrent of bits circulating in a clear shell of glass fibers, databases, and input devices. ... [I]ndustrial factories mass-produce video cameras, tape recorders, hard disks, text scanners, spreadsheets, modems, and satellite dishes. Each of these is an eye, an ear, or a neuron. Connected together, they form a billion-lobed sense organ floating in the clear medium of whizzing digits." See also Phil Patton, "Caught. You used to watch television. Now it watches you," Wired, January 1995, p. 125. Patton notes the rapid increase in the number of cameras in our society and describes how we are under camera surveillance in most of our workplaces and in virtually all public places. With the advent of videoconferencing, we can anticipate that we will soon have cameras in our homes.
He says "World peace relies on video: cameras are watching weapons facilities in Iraq." In addition, it is not just video cameras but motion detectors, access cards and many other kinds of technologies that monitor who goes where and when. Future advances might include software that sends messages back to headquarters, to permit fees to be charged for use of certain information; to permit software developers to provide the ultimate in service, automatic software upgrades; to permit individuals to trace where their personal information is going across the Internet; or to trace where pets (and possibly children) are going in case they get lost. The technology of tomorrow could include electronic eavesdropping where the computer filters out everything except the information sought by the eavesdropper (key words selected by an investigative agency, for example). Crevier suggests "an AI-assisted listener could process four hundred hours of tape in, say, two hours." AI: The Tumultuous History of the Search for Artificial Intelligence, Daniel Crevier, New York: Basic Books, 1993, p. 321, and see p. 338 re: tracing personal information as it moves through the Internet.
42 Privacy Commissioner Annual Report 1993-94, p. 4
43 "New decision support software will help Ottawa stamp out waste and fraud," Craig Hubbard, Technology in Government, Feb. 1995, p. 13
44 Privacy Commissioner Annual Report 1993-94, p. 12