[Report Cover]

                  [Header all report pages:
              May 30, 1996, Prepublication Copy
           Subject to Further Editorial Correction]



               Cryptography's Role in Securing
                   the Information Society



            Kenneth Dam and Herbert Lin, Editors

       Committee to Study National Cryptography Policy
        Computer Science and Telecommunications Board
Commission on Physical Sciences, Mathematics, and Applications

                  National Research Council


                   National Academy Press
                    Washington, D.C. 1996


____________________________________________________________


NOTICE: The project that is the subject of this report was
approved by the Governing Board of the National Research
Council, whose members are drawn from the councils of the
National Academy of Sciences, the National Academy of
Engineering, and the Institute of Medicine. The members of the
committee responsible for the report were chosen for their
special competences and with regard for appropriate balance.

   This report has been reviewed by a group other than the
authors according to procedures approved by a Report Review
Committee consisting of members of the National Academy of
Sciences, the National Academy of Engineering, and the
Institute of Medicine.


   The National Academy of Sciences is a private, nonprofit,
self-perpetuating society of distinguished scholars engaged in
scientific and engineering research, dedicated to the
furtherance of science and technology and to their use for the
general welfare. Upon the authority of the charter granted to
it by the Congress in 1863, the Academy has a mandate that
requires it to advise the federal government on scientific and
technical matters. Dr. Bruce Alberts is president of the
National Academy of Sciences.

   The National Academy of Engineering was established in
1964, under the charter of the National Academy of Sciences,
as a parallel organization of outstanding engineers. It is
autonomous in its administration and in the selection of its
members, sharing with the National Academy of Sciences the
responsibility for advising the federal government. The
National Academy of Engineering also sponsors engineering
programs aimed at meeting national needs, encourages education
and research, and recognizes the superior achievements of
engineers. Dr. Harold Liebowitz is president of the National
Academy of Engineering.

   The Institute of Medicine was established in 1970 by the
National Academy of Sciences to secure the services of eminent
members of appropriate professions in the examination of
policy matters pertaining to the health of the public. The
Institute acts under the responsibility given to the National
Academy of Sciences by its congressional charter to be an
adviser to the federal government and, upon its own
initiative, to identify issues of medical care, research, and
education. Dr. Kenneth I. Shine is president of the Institute
of Medicine.

   The National Research Council was organized by the National
Academy of Sciences in 1916 to associate the broad community
of science and technology with the Academy's purposes of
furthering knowledge and advising the federal government.
Functioning in accordance with general policies determined by
the Academy, the Council has become the principal operating
agency of both the National Academy of Sciences and the
National Academy of Engineering in providing services to the
government, the public, and the scientific and engineering
communities. The Council is administered jointly by both
Academies and the Institute of Medicine. Dr. Bruce Alberts and
Dr. Harold Liebowitz are chairman and vice chairman,
respectively, of the National Research Council.

   Support for this project was provided by the Department of
Defense (under contract number DASW01-94-C-0178) and the
National Institute of Standards and Technology (under contract
number 50SBNB4C8089). Any opinions, findings, conclusions, or
recommendations expressed in this material are those of the
authors and do not necessarily reflect the views of the
sponsors.

Library of Congress Catalog Number 96-68943
International Standard Book Number 0-309-05475-3

Additional copies of this report are available from:

National Academy Press
2101 Constitution Avenue, NW, Box 285
Washington, DC 20055
800/624-6242
202/334-3313 (in the Washington Metropolitan Area)

Copyright 1996 by the National Academy of Sciences. All rights
reserved.

Printed in the United States of America

____________________________________________________________


                     COMMITTEE TO STUDY
                NATIONAL CRYPTOGRAPHY POLICY


KENNETH W. DAM, University of Chicago Law School, Chair
W.Y. SMITH, Institute for Defense Analyses (retired), Vice
   Chair
LEE BOLLINGER, Dartmouth College
ANN CARACRISTI, National Security Agency (retired)
BENJAMIN CIVILETTI, Venable, Baetjer, Howard and Civiletti
COLIN CROOK, Citicorp
SAMUEL H. FULLER, Digital Equipment Corporation
LESLIE H. GELB, Council on Foreign Relations
RONALD GRAHAM, AT&T Bell Laboratories
MARTIN HELLMAN, Stanford University
JULIUS KATZ, Hills & Company
PETER G. NEUMANN, SRI International
RAYMOND OZZIE, Iris Associates
EDWARD SCHMULTS, General Telephone and Electronics (retired)
ELLIOT M. STONE, Massachusetts Health Data Consortium
WILLIS WARE, RAND Corporation


Staff

MARJORY S. BLUMENTHAL, Director
HERBERT S. LIN, Study Director and Senior Staff Officer
JOHN M. GODFREY, Research Associate
FRANK PITTELLI, Consultant to CSTB
GAIL E. PRITCHARD, Project Assistant

____________________________________________________________


COMPUTER SCIENCE AND TELECOMMUNICATIONS BOARD

WILLIAM A. WULF, University of Virginia, Chair
FRANCES E. ALLEN, IBM T.J. Watson Research Center
DAVID CLARK, Massachusetts Institute of Technology
JEFF DOZIER, University of California at Santa Barbara
HENRY FUCHS, University of North Carolina
CHARLES GESCHKE, Adobe Systems Incorporated
JAMES GRAY, Microsoft Corporation
BARBARA GROSZ, Harvard University
JURIS HARTMANIS, Cornell University
DEBORAH A. JOSEPH, University of Wisconsin
BUTLER W. LAMPSON, Microsoft Corporation
BARBARA LISKOV, Massachusetts Institute of Technology
JOHN MAJOR, Motorola
ROBERT L. MARTIN, AT&T Network Systems
DAVID G. MESSERSCHMITT, University of California at Berkeley
WILLIAM PRESS, Harvard University
CHARLES L. SEITZ, Myricom Incorporated
EDWARD SHORTLIFFE, Stanford University School of Medicine
CASIMIR S. SKRZYPCZAK, NYNEX Corporation
LESLIE L. VADASZ, Intel Corporation


MARJORY S. BLUMENTHAL, Director
HERBERT S. LIN, Senior Staff Officer
PAUL D. SEMENZA, Staff Officer
JERRY R. SHEEHAN, Staff Officer
JEAN E. SMITH, Program Associate
JOHN M. GODFREY, Research Associate
LESLIE M. WADE, Research Assistant
GLORIA P. BEMAH, Administrative Assistant
GAIL E. PRITCHARD, Project Assistant

____________________________________________________________


              COMMISSION ON PHYSICAL SCIENCES,
                MATHEMATICS, AND APPLICATIONS

ROBERT J. HERMANN, United Technologies Corporation, Chair
PETER M. BANKS, Environmental Research Institute of Michigan
SYLVIA T. CEYER, Massachusetts Institute of Technology
L. LOUIS HEGEDUS, W.R. Grace and Company (retired)
JOHN E. HOPCROFT, Cornell University
RHONDA J. HUGHES, Bryn Mawr College
SHIRLEY A. JACKSON, U.S. Nuclear Regulatory Commission
KENNETH I. KELLERMANN, National Radio Astronomy Observatory
KEN KENNEDY, Rice University
THOMAS A. PRINCE, California Institute of Technology
JEROME SACKS, National Institute of Statistical Sciences
L.E. SCRIVEN, University of Minnesota
LEON T. SILVER, California Institute of Technology
CHARLES P. SLICHTER, University of Illinois at
   Urbana-Champaign
ALVIN W. TRIVELPIECE, Oak Ridge National Laboratory
SHMUEL WINOGRAD, IBM T.J. Watson Research Center
CHARLES A. ZRAKET, MITRE Corporation (retired)


NORMAN METZGER, Executive Director

____________________________________________________________


                           Preface

                        INTRODUCTION


   For most of history, cryptography -- the art and science of
secret writing -- has belonged to governments concerned about
protecting their own secrets and about asserting their
prerogatives for access to information relevant to national
security and public safety. In the United States, cryptography
policy has reflected the U.S. government's needs for effective
cryptographic protection of classified and other sensitive
communications as well as its needs to gather intelligence for
national security purposes, needs that would be damaged by the
widespread use of cryptography. National security concerns
have motivated such actions as development of cryptographic
technologies, development of countermeasures to reverse the
effects of encryption, and control of cryptographic
technologies for export.

   In the last 20 years, a number of developments have brought
about what could be called the popularization of cryptography.
First, some industries -- notably financial services -- have
come to rely on encryption as an enabler of secure electronic
funds transfers. Second, other industries have developed an
interest in encryption for protection of proprietary and other
sensitive information. Third, the broadening use of computers
and computer networks has generalized the demand for
technologies to secure communications down to the level of
individual citizens and assure the privacy and security of
their electronic records and transmissions. Fourth, the
sharply increased use of wireless communications (e.g.,
cellular telephones) has highlighted the greater vulnerability
of such communications to unauthorized intercept as well as
the difficulty of detecting these intercepts.

   As a result, efforts have increased to develop encryption
systems for private sector use and to integrate encryption
with other information technology products. Interest has grown
in the commercial market for cryptographic technologies and
systems incorporating such technologies, and the nation has
witnessed a heightened debate over individual need for and
access to technologies to protect individual privacy.

   Still another consequence of the expectation of widespread
use of encryption is the emergence of law enforcement concerns
that parallel, on a civilian basis, some of the national
security concerns. Law enforcement officials fear that wide
dissemination of effective cryptographic technologies will
impede their efforts to collect information necessary for
pursuing criminal investigations. On the other side, civil
libertarians fear that controls on cryptographic technologies
will give government authorities both in the United States and
abroad unprecedented and unwarranted capabilities for
intrusion into the private lives of citizens.


              CHARGE OF THE COMMITTEE TO STUDY
                NATIONAL CRYPTOGRAPHY POLICY

   At the request of the U.S. Congress in November 1993, the
National Research Council's Computer Science and
Telecommunications Board (CSTB) formed the Committee to Study
National Cryptography Policy. In accordance with its
legislative charge (Box P.1), the committee undertook the
following tasks:

   + Framing the problem. What are the technology trends with
which national cryptography policy must keep pace? What is the
political environment? What are the significant changes in the
post-Cold War environment that call attention to the need for,
and should have an impact on, cryptography policy?

   + Understanding the underlying technology issues and their
expected development and impact on policy over time. What is
and is not possible with current cryptographic (and related)
technologies? How could these capabilities have an impact on
various U.S. interests?

   + Describing current cryptography policy. To the
committee's knowledge, there is no single document, classified
or unclassified, within the U.S. government that fully
describes national cryptography policy.

   + Articulating a framework for thinking about cryptography
policy. The interests affected by national cryptography policy
are multiple, varied, and related: they include personal
liberties and constitutional rights, the maintenance of public
order and national security, technology development, and U.S.
economic competitiveness and markets. At a minimum, policy
makers (and their critics) must understand how these interests
interrelate, although they may decide that one particular
policy configuration better serves the overall national
interest than does another.

   + Identifying a range of feasible policy options. The debate
over cryptography policy has been hampered by an incomplete
analysis and discussion of various policy options -- proponents
of both current policy and alternative policies are
forced into debating positions in which it is difficult or
impossible to acknowledge that a competing view might have
some merit. This report attempts to discuss fairly the pros
and cons of a number of options.

   + Making recommendations regarding cryptography policy. No
cryptography policy will be stable for all time. That is, it
is unrealistic to imagine that this committee or any set of
policy makers could craft a policy that would not have to
evolve over time as the technological and political milieu
itself changes. Thus, the committee's recommendations are
framed in the context of a transition, from a world
characterized by slowly evolving technology, well-defined
enemies, and unquestioned U.S. technological, economic, and
geopolitical dominance to one characterized by rapidly
evolving technology, fuzzy lines between friend and foe, and
increasing technological, economic, and political
interdependencies between the United States and other nations
of the world.

____________________________________________________________

 BOX P.1 Legislative Charge to the National Research Council

                     Public Law 103-160
       Defense Authorization Bill for Fiscal Year 1994
                  Signed November 30, 1993

SEC. 267. COMPREHENSIVE INDEPENDENT STUDY OF NATIONAL
CRYPTOGRAPHY POLICY

   (a)  Study by National Research Council. -- Not later than
90 days after the date of the enactment of this Act, the
Secretary of Defense shall request the National Research
Council of the National Academy of Sciences to conduct a
comprehensive study of cryptographic technologies and national
cryptography policy.

   (b)  Matters To Be Assessed in Study. -- The study shall
        assess

        (1)  the effect of cryptographic technologies on  --

             (A)  national security interests of the United
                   States Government;
             (B)  law enforcement interests of the United
                   States Government;
             (C)  commercial interests of United States
                  industry; and
             (D)  privacy interests of United States
                  citizens; and

        (2)  the effect on commercial interests of United
             States industry of export controls on
             cryptographic technologies.

   (c)  Interagency Cooperation With Study. -- The Secretary
   of Defense shall direct the National Security Agency, the
   Advanced Research Projects Agency, and other appropriate
   agencies of the Department of Defense to cooperate fully
   with the National Research Council in its activities in
   carrying out the study under this section. The Secretary
   shall request all other appropriate Federal departments and
   agencies to provide similar cooperation to the National
   Research Council.
____________________________________________________________


   Given the diverse applications of cryptography, national
cryptography policy involves a very large number of important
issues. Important to national cryptography policy as well are
issues related to the deployment of a large-scale
infrastructure for cryptography and legislation and
regulations to support the widespread use of cryptography for
authentication and data integrity purposes (i.e., collateral
applications of cryptography), even though these issues have
not taken center stage in the policy debate.

   The committee focused its efforts primarily on issues
related to cryptography for confidentiality, because the
contentious problem at the center of the public policy debate,
which this committee was assembled to address, relates to the
use of cryptography in confidentiality applications. It
also addressed issues of cryptography policy related to
authentication and data integrity at a relatively high level,
casting its findings and recommendations in these areas in
fairly general terms. However, it notes that detailed
consideration of issues and policy options in these collateral
areas requires additional study at a level of detail and
thoroughness comparable to that of this report.

   In preparing this report, the committee reviewed and
synthesized relevant material from recent reports, took
written and oral testimony from government, industry, and
private individuals, reached out extensively to the affected
stakeholders to solicit input, and met seven times to discuss
the input from these sources as well as the independent
observations and findings of the committee members themselves.
In addition, this study built upon three prior efforts to
examine national cryptography policy: the Association for
Computing Machinery report *Codes, Keys, and Conflicts: Issues
in U.S. Crypto Policy*,(1) the Office of Technology Assessment
report *Information Security and Privacy in Network
Environments*,(2) and the JASON encryption study.(3) A number
of other examinations of cryptography and/or information
security policy were also important to the committee's
work.(4)

---------

   (1)  Susan Landau et al., *Codes, Keys, and Conflicts:
Issues in U.S. Crypto Policy*, Association for Computing
Machinery Inc., New York, 1994.

   (2)  U.S. Congress, Office of Technology Assessment,
*Information Security and Privacy in Network Environments*,
OTA-TCT-606, U.S. Government Printing Office, Washington, D.C.,
September 1994.

   (3)  JASON Program Office, *JASON Encryption/Privacy
Study*, Report JSR-93-520 (unpublished), MITRE Corporation,
Reston, Va., August 18, 1993.

   (4)  These works include *Global Information
Infrastructure*, a joint report by the European Association of
Manufacturers of Business Machines and Information Technology
Industry, the U.S. Information Technology Industry Council,
and the Japan Electronic Industry Development Association
(EUROBIT-ITI-JEIDA), developed for the G-7 Summit on the
Global Information Society, GII Tripartite Preparatory
Meeting, January 26-27, 1995, Brussels; the U.S. Council for
International Business statement titled "Business Requirements
for Encryption," October 10, 1994, New York; and the
International Chamber of Commerce position paper
"International Encryption Policy," Document No. 373/202 Rev.
and No. 373-30/9 Rev., Paris, undated. Important source
documents can be found in Lance J. Hoffman (ed.), *Building in
Big Brother*, Springer-Verlag, New York, 1995; and in the
cryptography policy source books published annually by the
Electronic Privacy Information Center in Washington, D.C.
____________________________________________________________


                   WHAT THIS REPORT IS NOT

   The subject of national cryptography policy is quite
complex, as it figures importantly in many areas of national
interest. To keep the project manageable within the time,
resources, and expertise available, the committee chose not to
address in detail a number of issues that arose with some
nontrivial frequency during the course of its study.

   + This report is not a comprehensive study of the grand
trade-offs that might be made in other dimensions of national
policy to compensate for changes in cryptography policy. For
example, this report does not address matters such as relaxing
exclusionary rules that govern the court admissibility of
evidence or installing video cameras in every police helmet as
part of a package that also eliminates restrictions on
cryptography, though such packages are in principle possible.
Similarly, it does not address options such as increasing the
budget for counterterrorist operations as a quid pro quo for
relaxations on export controls of cryptography. The report
does provide information that would help to assess the impact
of various approaches to cryptography policy, although how
that impact should be weighed against the impact of policies
related to other areas is outside the scope of this study and
the expertise of the committee assembled for it.

   + This report is not a study on the future of the National
Security Agency (NSA) in the post-Cold War era. A
determination of what missions the NSA should be pursuing
and/or how it should pursue those missions was not in the
committee's charge. The report does touch lightly on
technological trends that affect the ability to undertake the
missions to which cryptography is relevant, but only to the
extent necessary to frame the cryptography issue.

   At the same time, this report does address certain
conditions of the political, social, and technological
environment that will affect the answers that anyone would
formulate to these questions, such as the potential impact on
policy of a world that offers many users the possibilities of
secure communications.

   + This report is not a study of computer and communications
security, although of course cryptography is a key element of
such security. Even the strongest cryptography is not very
useful unless it is part of a secure *system*, and those
responsible for security must be concerned about everything
from the trustworthiness of individuals writing the computer
programs to be used to the physical security of terminals used
to access the system. A report that addressed system
dimensions of computer security was the National Research
Council report *Computers at Risk*;(5) this current study
draws on that report and others to the extent relevant for its
analysis, findings, and conclusions about cryptography policy.

   + This report is not a study of the many patent disputes
that have arisen with respect to national cryptography policy
in the past several years. While such disputes may well be a
sign that the various patent holders expect cryptography to assume
substantial commercial importance in the next several years,
such disputes are in principle resolvable by the U.S.
Congress, which could simply legislate ownership by eminent
domain or by requiring compulsory licensing. Moreover, since
many of the key patents will expire in the relatively near
future (i.e., before any infrastructure that uses them becomes
widely deployed), the issue will become moot in any case.

   + This report is not exclusively a study of national policy
associated with the Clipper chip. While the Clipper chip has
received the lion's share of press and notoriety in the past
few years, the issues that this study was chartered to address
go far beyond those associated simply with the Clipper chip.
This study addresses the larger context and picture of which
the Clipper chip is only one part.

----------

   (5)  Computer Science and Telecommunications Board,
National Research Council, *Computers at Risk: Safe Computing
in the Information Age*, National Academy Press, Washington,
D.C., 1991.

____________________________________________________________


               ON SECRECY AND REPORT TIME LINE

   For most of history, the science and technologies
associated with cryptography have been the purview of national
governments and/or heads of state. It is only in the last 25
years that cryptographic expertise has begun to diffuse into
the nongovernment world. Thus, it is not surprising that much
of the basis and rationale underlying national cryptography
policy has been and continues to be highly classified. Indeed,
in a 1982 article, then-Deputy Director of the Central
Intelligence Agency Bobby R. Inman wrote that

   [o]ne sometimes hears the view that publication should not
   be restrained because "the government has not made its
   case," almost always referring to the absence of specific
   detail for public consumption. This reasoning is circular
   and unreasonable. It stems from a basic attitude that the
   government and its public servants cannot be trusted.
   Specific details about why information must be protected
   are more often than not even more sensitive than the basic
   technical information itself. Publishing examples, reasons
   and associated details would certainly damage the nation's
   interests. Public review and discussion of classified
   information which supports decisions is not feasible or
   workable.(6)

   Secrecy is a two-edged sword for a democratic nation -- on
the one hand, secrecy has a legitimate basis in those
situations in which fundamental national interests are at
stake (e.g., the preservation of American lives during
wartime). Moreover, the history of intelligence reveals many
instances in which the revelation of a secret, whether
intentional or inadvertent, has led to the compromise of an
information source or the loss of a key battle.(7)

   On the other hand, secrecy has sometimes been used to
stifle public debate and conceal poorly conceived and
ill-informed national policies, and mistrust is therefore
quite common among many responsible critics of government
policy. A common refrain by defenders of policies whose
origins and rationales are secret is that "if you knew what we
knew, you would agree with us." Such a position may be true or
false, but it clearly does not provide much reassurance for
those not privy to those secrets for one very simple reason:
those who fear that government is hiding poorly conceived
policies behind a wall of secrecy are not likely to trust the
government, yet in the absence of a substantive argument, the
government's claim is essentially a plea for trust.

   In pursuing this study, the committee has adopted the
position that some secrets are still legitimate in today's
global environment, but that its role is to illuminate as much
as possible without compromising those legitimate interests.
Thus, the committee has tried to act as a surrogate for
well-intentioned and well-meaning people who fear that the
worst is hiding behind the wall of secrecy -- it has tried to
ask the questions that these people would have asked if they
could have done so. Public Law 103-160 called for all defense
agencies, including the National Security Agency, to cooperate
fully with the National Research Council in this study.

   For obvious reasons, the committee cannot determine if it
did not hear a particular piece of information because an
agency withheld that information or because that piece of
information simply did not exist. But for a number of reasons,
the committee believes that to the best of its knowledge, the
relevant agencies have complied with Public Law 103-160 and
other agencies have cooperated with the committee. One
important reason is that several members of the committee have
had extensive experience (on a classified basis) with the
relevant agencies, and these members heard nothing in the
briefings held for the committee that was inconsistent with
that experience. A second reason is that these agencies had
every motivation and self-interest to make the best possible
case for their respective positions on the issues before the
committee. Thus, on the basis of agency assurances that the
committee has indeed received all information relevant to
the issue at hand, they cannot plausibly argue that "if the
committee knew what Agency X knew, it would agree with Agency
X's position."

   This unclassified report does not have a classified annex,
nor is there a classified version of it. After receiving a
number of classified briefings on material relevant to the
subject of this study, the fully cleared members of the
committee (13 out of the total of 16) agree that these
details, while necessarily important to policy makers who need
to decide tomorrow what to do in a specific case, are not
particularly relevant to the larger issues of why policy has
the shape and texture that it does today nor to the general
outline of how technology will and policy should evolve in the
future. For example, the committee was briefed on certain
intelligence activities of various nations. Policy makers care
that the activities of nation X (a friendly nation) fall into
certain categories and that those of nation Y (an unfriendly
nation) fall into other categories, because they must craft a
policy toward nation X in one way and one toward nation Y in
another way. But for analytical purposes, the exact names of
the nations involved are much less relevant than the fact that
there will always be nations friendly and unfriendly to the
United States. Committee members are prepared to respond on a
classified basis if necessary to critiques and questions that
involve classified material.(8)

   As for the time line of this study, the committee was
acutely aware of the speed with which the market and product
technologies evolve. The legislation called for a study to be
delivered within 2 years after the full processing of all
necessary security clearances, and the study committee
accelerated its work schedule to deliver a report in 18 months
from its first meeting (and only 13 months from the final
granting of the last clearance). The delivery date of this
study was affected by the fact that the contract to fund this
study was signed by the Department of Defense on September 30,
1994.

----------

   (6)  Bobby Inman, "Classifying Science: A Government
Proposal ... ," *Aviation Week and Space Technology*, February
8, 1982, p. 10.

   (7)  For example, following press reports of deciphered
Libyan messages before and after a bombing in West Berlin in
which an American soldier died, Libya changed its
communications codes. A senior American official was quoted as
saying that the subsequent Libyan purchase of advanced
cryptographic equipment from a Swiss firm was "one of the
prices [the United States is] paying for having revealed, in
order to marshal support of our allies and public opinion,
that intercepted communications traffic provided evidence that
Libya was behind the bombing of the Berlin disco." See
"Libyans Buy Message-Coding Equipment," *Washington Post*,
April 22, 1986, p. A-8.

   (8)   The point of contact within the National Research
Council for such inquiries is the Computer Science and
Telecommunications Board, National Research Council, 2101
Constitution Avenue, N.W., Washington, D.C. Telephone
202-334-2605 or e-mail CSTB@NAS.EDU.

____________________________________________________________


                    A NOTE FROM THE CHAIR

   The title of this report is *Cryptography's Role in
Securing the Information Society*. The committee chose this
title as one best describing our inquiry and report -- that
is, the committee has tried to focus on the role that
cryptography, as one of a number of tools and technologies,
can play in providing security for an information age society
through, among other means, preventing computer-enabled crimes
and enhancing national security. At the same time, the
committee is not unaware of the acronym for this report --
CRISIS -- and it believes that the acronym is apt.

   From my own standpoint as chair of the NRC Committee to
Study National Cryptography Policy, I believe that the crisis
is a policy crisis, rather than a technology crisis, an
industry crisis, a law enforcement crisis, or an
intelligence-gathering crisis.

   It is not a technology crisis because technologies have
always been two-edged swords. All technologies -- cryptography
included -- can be used for good or for ill. They can be used to
serve society or to harm it, and cryptography will no doubt be
used for both purposes by different groups. Public policy will
determine in large measure not just the net balance of benefit
and loss but also how much benefit will be derived from
constructive uses of this remarkable technology.

   It is not an industry crisis, nor a law enforcement crisis,
nor an intelligence-gathering crisis, because industry, law
enforcement, and the intelligence establishment have all had
to cope with rapid technological change, and for the most part
the vitality of these enterprises within the nation is a
testament to their successes in so coping.

   But a policy crisis is upon the nation. In the face of an
inevitably growing use of cryptography, our society, acting as
it must through our government as informed by the manifold
forums of our free private processes, has been unable to
develop a consensus behind a coherent national cryptography
policy, either within its own ranks or with the private
stakeholders throughout society -- the software industry,
those concerned with computer security, the civil liberties
community, and so on. Indeed, the committee could not even
find a clear written statement of national cryptography policy
that went beyond some very general statements.

   To be sure, a number of Administration proposals have seen
the light of day. The best known of these proposals, the
Clipper initiative, was an honest attempt to address some of
the issues underlying national cryptography policy, but one of
its primary effects was to polarize rather than bring together
the various stakeholders, both public and private. On the
other hand, it did raise public awareness of the issue. In
retrospect, many Administration officials have wished that the
discourse on national cryptography policy could have unfolded
differently, but in fairness we recognize that the
government's task is not easy in view of the deep cleavages of
interest reviewed in this report. In this context, we
therefore saw it as our task, commanded by our statutory
charge, to analyze the underlying reasons for this policy
crisis and the interests at stake, and then to propose an
intelligent, workable and acceptable policy.

   The Committee to Study National Cryptography Policy is a
group of 16 individuals with very diverse backgrounds, a broad
range of expertise, and differing perspectives on the subject.
The committee included individuals with extensive government
service and also individuals with considerable skepticism
about and suspicion of government; persons with great
technical expertise in computers, communications, and
cryptography; and persons with considerable experience in law
enforcement, intelligence, civil liberties, national security,
diplomacy, international trade, and other fields relevant to
the formation of policy in this area. Committee members were
drawn from industry, including telecommunications and computer
hardware and software, and from users of cryptography in the
for-profit and not-for-profit sectors; serving as well were
academics and think-tank experts.(9) The committee was by
design highly heterogeneous, a characteristic intended to
promote discussion and synergy among its members.

   At first, we wondered whether these different perspectives
would allow us to talk among ourselves at all, let alone come
to agreement. But the committee worked hard. The full
committee met for a total of 23 days in which we received
briefings and argued various points; ad hoc subcommittees
attended a dozen or so additional meetings to receive even
more briefings; members of the committee and staff held a
number of open sessions in which testimony from the interested
public was sought and received (including a very well attended
session at the Fifth Annual Conference on Computers, Freedom,
and Privacy in San Francisco in early 1995 and an open session
in Washington, D.C., in April 1995); and the committee
reviewed nearly a hundred e-mail messages sent in response to
its Internet call for input. The opportunity to receive not
only written materials but also oral briefings from a number
of government agencies, vendors, trade associations, and
assorted experts, as well as to participate in the first-ever
cryptography policy meeting of the Organization for Economic
Cooperation and Development and of its Business Industry
Advisory Council, provided the occasion for extended
give-and-take discussions with government officials and
private stakeholders.

   Out of this extended dialogue, we found that coming to a
consensus among ourselves -- while difficult -- was not
impossible. The nature of a consensus position is that it is
invariably somewhat different from a position developed,
framed, and written by any one committee member, particularly
before our dialogue and without comments from other committee
members. Our consensus is a result of the extended learning
and interaction process through which we lived rather than any
conscious effort to compromise or to paper over differences.
The committee stands fully behind its analysis, findings, and
recommendations.

   We believe that our report makes some reasonable proposals
for national cryptography policy. But a proposal is just that
-- a proposal for action. What is needed now is a public
debate, using and not sidestepping the full processes of
government, leading to a judicious resolution of pressing
cryptography policy issues and including, on some important
points, legislative action. Only in this manner will the
policy crisis come to a satisfactory and stable resolution.

----------

   (9)   Note that the committee was quite aware of potential
financial conflicts of interest among several of its members.
In accordance with established National Research Council
procedures, these potential financial conflicts of interest
were thoroughly discussed by the committee; no one with a
direct and substantial financial stake in the outcome of the
report served on the committee.

____________________________________________________________


                       ACKNOWLEDGMENTS

   The full list of individuals (except for those who
explicitly requested anonymity) who provided input to the
committee and the study project is contained in Appendix A.
However, a number of individuals deserve special mention.
Michael Nelson, Office of Science and Technology Policy, kept
us informed about the evolution of Administration policy.
Dorothy Denning of Georgetown University provided many useful
papers concerning the law enforcement perspective on
cryptography policy. Clinton Brooks and Ron Lee from the
National Security Agency and Ed Roback and Raymond Kammer from
the National Institute of Standards and Technology acted as
agency liaisons for the committee, arranging briefings and
providing other information. Marc Rotenberg from the
Electronic Privacy Information Center and John Gilmore from
Cygnus Support provided continuing input on a number of
subjects as well as documents released under Freedom of
Information Act requests. Rebecca Gould from the Business
Software Alliance, Steve Walker from Trusted Information
Systems, and Ollie Smoot from the Information Technology
Industry Council kept the committee informed from the business
perspective. Finally, the committee particularly acknowledges
the literally hundreds of suggestions and criticisms provided
by the reviewers of an early draft of this report. Those
inputs helped the committee to sharpen its message and
strengthen its presentation, but of course the content of the
report is the responsibility of the committee.

   The committee also received a high level of support from
the National Research Council. Working with the Special
Security Office of the Office of Naval Research, Kevin Hale
and Kimberly Striker of the NRC's National Security Office had
the complex task of facilitating the prompt processing of
security clearances necessary to complete this study in a
timely manner and otherwise managing these security
clearances. Susan Maurizi worked under tight time constraints
to provide editorial assistance. Acting as primary staff for
the committee were Marjory Blumenthal, John Godfrey, Frank
Pittelli, Gail Pritchard, and Herb Lin. Marjory Blumenthal
directs the Computer Science and Telecommunications Board, the
program unit within the National Research Council to which
this congressional tasking was assigned. She sat with the
committee during the great majority of its meetings, providing
not only essential insight into the NRC process but also an
indispensable long-term perspective on how this report could
build on other CSTB work, most notably the 1991 NRC report
*Computers at Risk*. John Godfrey, research associate for
CSTB, was responsible for developing most of the factual
material in the appendixes as well as for tracking down
hundreds of loose ends; his prior work on a previous NRC
report on standards also provided an important point of
departure for the committee's discussion on standards as they
apply to cryptography policy. Frank Pittelli is a consultant
to CSTB, whose prior experience in computer and information
security was invaluable in framing a discussion of technical
issues in cryptography policy. Gail Pritchard, project
assistant for CSTB, handled logistical matters for the
committee with the utmost skill and patience as well as
providing some research support to the committee. Finally,
Herb Lin, senior staff officer for CSTB and study director on
this project, arranged briefings, crafted meeting agendas, and
turned the thoughts of committee members into drafts and then
report text. It is fair to say that this study could not have
been carried out nor this report written, especially on our
accelerated schedule, without his prodigious energy and his
extraordinary talents as study director, committee
coordinator, writer, and editor.

Kenneth Dam, Chair
Committee to Study National Cryptography Policy

Chicago, Illinois
March 29, 1996



A Channel for Feedback

   CSTB will be glad to receive comments on this report.
Please send them via Internet e-mail to CRYPTO@NAS.EDU, or via
regular mail to CSTB, National Research Council, 2101
Constitution Avenue NW, Washington, DC 20418.

[End Preface]

____________________________________________________________


                          Contents


PREFACE

   Introduction
   Charge of the Committee to Study National Cryptography
   Policy
   What This Report Is Not
   On Secrecy and Report Time Line
   A Note from the Chair
   Acknowledgments

EXECUTIVE SUMMARY

A ROAD MAP THROUGH THIS REPORT



             PART I -- FRAMING THE POLICY ISSUES


1  GROWING VULNERABILITY IN THE INFORMATION AGE

   1.1  The Technology Context of the Information Age

   1.2  Transitions to an Information Society--Increasing
        Interconnections and Interdependence

   1.3  Coping with Information Vulnerability

   1.4  The Business and Economic Perspective

        1.4.1  Protecting Important Business Information
        1.4.2  Ensuring the Nation's Ability to Exploit
               Global Markets

   1.5  Individual and Personal Interests in Privacy

        1.5.1  Privacy in an Information Economy
        1.5.2  Privacy for Citizens

   1.6  Special Needs of Government

   1.7  Recap


2  CRYPTOGRAPHY: ROLES, MARKET, AND INFRASTRUCTURE

   2.1  Cryptography in Context

   2.2  What Is Cryptography and What Can It Do?

   2.3  How Cryptography Fits into the Big Security Picture

        2.3.1  Technical Factors Inhibiting Access to
               Information
        2.3.2  Factors Facilitating Access to Information

   2.4  The Market for Cryptography

        2.4.1  The Demand Side of the Cryptography Market
        2.4.2  The Supply Side of the Cryptography Market

   2.5  Infrastructure for Widespread Use of Cryptography

        2.5.1  Key Management Infrastructure
        2.5.2  Certificate Infrastructures

   2.6 Recap


3  NEEDS FOR ACCESS TO ENCRYPTED INFORMATION

   3.1  Terminology

   3.2  Law Enforcement: Investigation and Prosecution

        3.2.1  The Value of Access to Information for Law
               Enforcement
        3.2.2  The Legal Framework Governing Surveillance
        3.2.3  The Nature of Surveillance Needs of Law
               Enforcement
        3.2.4  The Impact of Cryptography and New Media on
               Law Enforcement (Stored and Communicated Data)

   3.3  National Security and Signals Intelligence

        3.3.1  The Value of Signals Intelligence
        3.3.2  The Impact of Cryptography on SIGINT

   3.4  Similarities and Differences Between Foreign
        Policy/National Security and Law Enforcement Needs for
        Communications Monitoring

        3.4.1  Similarities
        3.4.2  Differences

   3.5  Business and Individual Needs for Exceptional Access
        to Protected Information

   3.6  Other Types of Exceptional Access to Protected
        Information

   3.7  Recap



                PART II -- POLICY INSTRUMENTS


4  EXPORT CONTROLS

   4.1  Brief Description of Current Export Controls

        4.1.1  The Rationale for Export Controls
        4.1.2  General Description
        4.1.3  Discussion of Current Licensing Practices

   4.2  Effectiveness of Export Controls on Cryptography

   4.3  The Impact of Export Controls on U.S. Information
        Technology Vendors

        4.3.1  De Facto Restrictions on the Domestic
               Availability of Cryptography
        4.3.2  Regulatory Uncertainty Related to Export
               Controls
        4.3.3  The Size of the Affected Market for
               Cryptography
        4.3.4  Inhibiting Vendor Responses to User Needs

   4.4  The Impact of Export Controls on U.S. Economic and
        National Security Interests

        4.4.1  Direct Economic Harm to U.S. Businesses
        4.4.2  Damage to U.S. Leadership in Information
               Technology

   4.5  The Mismatch Between the Perceptions of Government/
        National Security and Those of Vendors

   4.6  Export of Technical Data

   4.7  Foreign Policy Considerations

   4.8  Technology-Policy Mismatches

   4.9  Recap


5  ESCROWED ENCRYPTION AND RELATED ISSUES

   5.1  What Is Escrowed Encryption?

   5.2  Administration Initiatives Supporting Escrowed
        Encryption

        5.2.1  The Clipper Initiative and the Escrowed
               Encryption Standard
        5.2.2  The Capstone/Forteza (sic) Initiative
        5.2.3  The Relaxation of Export Controls on Software
               Products Using "Properly Escrowed" 64-bit
               Encryption
        5.2.4  Other Federal Initiatives in Escrowed
               Encryption

   5.3  Other Approaches to Escrowed Encryption

   5.4  The Impact of Escrowed Encryption on Information
        Security

   5.5  The Impact of Escrowed Encryption on Law Enforcement

        5.5.1  Balance of Crime Enabled vs. Crime Prosecuted
        5.5.2  Impact on Law Enforcement Access to
               Information

   5.6  Mandatory vs. Voluntary Use of Escrowed Encryption

   5.7  Process Through Which Policy on Escrowed Encryption
        Was Developed

   5.8  Affiliation and Number of Escrow Agents

   5.9  Responsibilities and Obligations of Escrow Agents and
        Users of Escrowed Encryption

        5.9.1  Partitioning Escrowed Information
        5.9.2  Operational Responsibilities of Escrow Agents
        5.9.3  Liabilities of Escrow Agents

   5.10 The Role of Secrecy in Ensuring Product Security

        5.10.1 Algorithm Secrecy
        5.10.2 Product Design and Implementation Secrecy

   5.11 The Hardware/Software Choice in Product Implementation

   5.12 Responsibility for Generation of Unit Keys

   5.13 Issues Related to the Administration Proposal to
        Exempt 64-bit Escrowed Encryption in Software

        5.13.1 The Definition of "Proper Escrowing"
        5.13.2 The Proposed Limitation of Key Lengths to 64
               Bits or Less

   5.14 Recap


6  OTHER DIMENSIONS OF NATIONAL CRYPTOGRAPHY POLICY

   6.1  The Communications Assistance for Law Enforcement Act

        6.1.1  Brief Description of and Stated Rationale for
               the CALEA
        6.1.2  Reducing Resource Requirements for Wiretaps
        6.1.3  Obtaining Access to Digital Streams in the
               Future
        6.1.4  The CALEA Exemption of Information Service
               Providers and Distinctions Between Voice and
               Data Services

   6.2  Other Levers Used in National Cryptography Policy

        6.2.1  Federal Information Processing Standards
        6.2.2  The Government Procurement Process
        6.2.3  Implementation of Policy: Fear, Uncertainty,
               Doubt, Delay, Complexity
        6.2.4  R&D Funding
        6.2.5  Patents and Intellectual Property
        6.2.6  Formal and Informal Arrangements with Various
               Other Governments and Organizations
        6.2.7  Certification and Evaluation
        6.2.8  Nonstatutory Influence
        6.2.9  Interagency Agreements Within the Executive
               Branch

   6.3  Organization of the Federal Government with Respect to
        Information Security

        6.3.1  Role of National Security vis-a-vis Civilian
               Information Infrastructures
        6.3.2  Other Government Entities with Influence on
               Information Security

   6.4  International Dimensions of Cryptography Policy

   6.5  Recap



   PART III--POLICY OPTIONS, FINDINGS, AND RECOMMENDATIONS


7  POLICY OPTIONS FOR THE FUTURE

   7.1  Export Control Options for Cryptography

        7.1.1  Dimensions of Choice for Controlling the
               Export of Cryptography
        7.1.2  Complete Elimination of Export Controls on
               Cryptography
        7.1.3  Transferral of All Cryptography Products to
               the Commerce Control List
        7.1.4  End-use Certification
        7.1.5  Nation-by-Nation Relaxation of Controls and
               Harmonization of U.S. Export Control Policy on
               Cryptography with Export/Import Policies of
               Other Nations
        7.1.6  Liberal Export for Strong Cryptography with
               Weak Defaults
        7.1.7  Liberal Export for Cryptographic Applications
               Programming Interfaces
        7.1.8  Liberal Export for Escrowable Products with
               Encryption Capabilities
        7.1.9  Alternatives to Government Certification of
               Escrow Agents Abroad
        7.1.10 Use of Differential Work Factors in
               Cryptography
        7.1.11 Separation of Cryptography from Other Items on
               the U.S. Munitions List

   7.2  Alternatives for Providing Government Exceptional
        Access to Encrypted Data

        7.2.1  A Prohibition of the Use and Sale of
               Cryptography Lacking Features for Exceptional
               Access
        7.2.2  Criminalization of the Use of Cryptography in
               the Commission of a Crime
        7.2.3  Technical Non-Escrow Approaches for Obtaining
               Access to Information
        7.2.4  Network-based Encryption
        7.2.5  Distinguishing Between Encrypted Voice and
               Data Communications Services for Exceptional
               Access
        7.2.6  A Centralized Decryption Facility for
               Government Exceptional Access

   7.3  Looming Issues

        7.3.1  The Adequacy of Various Levels of Encryption
               Against High-Quality Attack
        7.3.2  Organizing the U.S. Government for Better
               Information Security on a National Basis

   7.4  Recap


8  SYNTHESIS, FINDINGS, AND RECOMMENDATIONS

   8.1  Synthesis and Findings

        8.1.1  The Problem of Information Vulnerability
        8.1.2  Cryptographic Solutions to Information
               Vulnerabilities
        8.1.3  The Policy Dilemma Posed by Cryptography
        8.1.4  National Cryptography Policy for the
               Information Age

   8.2  Recommendations

   8.3  Additional Work Needed

   8.4  Conclusion


                         APPENDIXES

A  Contributors to the NRC Project on National Cryptography
   Policy

B  Glossary

C  A Brief Primer on Cryptography

D  An Overview of Electronic Surveillance: History and Current
   Status

E  A Brief History of Cryptography Policy

F  A Brief Primer on Intelligence

G  The International Scope of Cryptography Policy

H  Summary of Important Requirements for a Public-Key
   Infrastructure

I  Industry-Specific Dimensions of Security

J  Examples of Risks Posed by Unprotected Information

K  Cryptographic Applications Programming Interfaces

L  Laws, Regulations, and Documents Relevant to Cryptography

M  Other Looming Issues Related to Cryptography Policy

N  Federal Information Processing Standards

[End Contents]

____________________________________________________________


                      Executive Summary

   In an age of explosive worldwide growth of electronic data
storage and communications, many vital national interests
require the effective protection of information. When used in
conjunction with other approaches to information security,
cryptography is a very powerful tool for protecting
information. Consequently, current U.S. policy should be
changed to promote and encourage the widespread use of
cryptography for the protection of the information interests
of individuals, businesses, government agencies, and the
nation as a whole, while respecting legitimate national needs
of law enforcement and intelligence for national security and
foreign policy purposes to the extent consistent with good
information protection.


                     BASIC POLICY ISSUES

              The Information Security Problem

   Today's information age requires U.S. businesses to compete
on a worldwide basis, sharing sensitive information with
appropriate parties while protecting that information against
competitors, vandals, suppliers, customers, and foreign
governments (Box ES.1). Private law-abiding citizens dislike
the ease with which personal telephone calls can be tapped,
especially those carried on cellular or cordless telephones.
Elements of the U.S. civilian infrastructure such as the
banking system, the electric power grid, the public switched
telecommunications network, and the air traffic control system
are central to so many dimensions of modern life that
protecting these elements must have a high priority. The
federal government has an important stake in assuring that its
important and sensitive political, economic, law enforcement,
and military information, both classified and unclassified, is
protected from foreign governments or other parties whose
interests are hostile to those of the United States.

____________________________________________________________

   BOX ES.1 The Foreign Threat to U.S. Business Interests

   Of the wide variety of information risks facing U.S.
companies operating internationally, those resulting from
electronic vulnerabilities appear to be the most significant.
The National Counterintelligence Center (NACIC), an arm of the
U.S. intelligence community established in 1994 by
presidential directive, concluded that "specialized technical
operations (including computer intrusions, telecommunications
targeting and intercept, and private-sector encryption
weaknesses) account for the largest portion of economic and
industrial information lost by U.S. corporations."
Specifically, the NACIC noted that

   [b]ecause they are so easily accessed and intercepted,
   corporate telecommunications --particularly international
   telecommunications -- provide a highly vulnerable and
   lucrative source for anyone interested in obtaining trade
   secrets or competitive information. Because of the
   increased usage of these links for bulk computer data
   transmission and electronic mail, intelligence collectors
   find telecommunications intercepts cost-effective. For
   example, foreign intelligence collectors intercept
   facsimile transmissions through government-owned telephone
   companies, and the stakes are large -- approximately half
   of all overseas telecommunications are facsimile
   transmissions. Innovative "hackers" connected to computers
   containing competitive information evade the controls and
   access companies' information. In addition, many American
   companies have begun using electronic data interchange, a
   system of transferring corporate bidding, invoice, and
   pricing data electronically overseas. Many foreign
   government and corporate intelligence collectors find this
   information invaluable.

----------

SOURCE: National Counterintelligence Center, Annual Report
to Congress on Foreign Economic Collection and Industrial
Espionage, July 1995, pages 16-17.

____________________________________________________________


 Cryptographic Dimensions of Information Security Solutions

   Information vulnerabilities cannot be eliminated through
the use of any single tool. For example, no technical means
can prevent a party who is authorized to view information
from improperly disclosing that information to someone else.
However, as part of a comprehensive approach to
addressing information vulnerabilities, cryptography is a
powerful tool that can help to assure the confidentiality and
integrity of information in transit and in storage and to
authenticate the asserted identity of individuals and computer
systems. Information that has been properly encrypted cannot
be understood or interpreted by those lacking the appropriate
cryptographic "key"; information that has been integrity-
checked cannot be altered without detection. Properly
authenticated identities can help to restrict access to
information resources to properly authorized individuals
and to take fuller advantage of audit trails to track down
parties who have abused their authorized access.
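
   To make these distinctions concrete, the brief sketch below
(in Python, and assuming the widely used third-party
"cryptography" package; the keys and messages shown are
invented) illustrates confidentiality provided by encryption
and integrity provided by a keyed hash. It is offered only as
an illustration of the concepts, not as a recommended
implementation.

   # Confidentiality: only holders of `key` can read the message.
   # Assumes the third-party "cryptography" package.
   import hashlib
   import hmac

   from cryptography.fernet import Fernet

   key = Fernet.generate_key()
   cipher = Fernet(key)
   ciphertext = cipher.encrypt(b"quarterly bid strategy")
   assert cipher.decrypt(ciphertext) == b"quarterly bid strategy"

   # Integrity: any alteration of the message changes the tag, and only
   # holders of `mac_key` can produce a valid tag for a new message.
   mac_key = b"shared-secret-key"
   tag = hmac.new(mac_key, b"transfer $100", hashlib.sha256).hexdigest()
   forged = hmac.new(mac_key, b"transfer $900", hashlib.sha256).hexdigest()
   assert not hmac.compare_digest(tag, forged)   # tampering is detected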



       Law Enforcement and National Security Dilemmas
                    Posed by Cryptography

   For both law enforcement and national security,
cryptography is a two-edged sword. The public debate has
tended to draw lines that frame the policy issues as the
privacy of individuals and businesses against the needs of
national security and law enforcement. While such a dichotomy
has a kernel of truth, it is misleading when viewed in the
large. If cryptography can protect the trade
secrets and proprietary information of businesses and thereby
reduce economic espionage (which it can), it also supports in
a most important manner the job of law enforcement. If
cryptography can help protect nationally critical information
systems and networks against unauthorized penetration (which
it can), it also supports the national security of the United
States. Framing discussion about national cryptography policy
in this larger law enforcement and national security context
would help to reduce some of the polarization among the
relevant stakeholders.

   On the other hand, cryptography intended primarily to
maintain the confidentiality of information, though available
to the general public for legitimate purposes such as
defending against information theft, is also available for
illegitimate purposes such as terrorism. Encryption thus
poses a threat to the capability that law enforcement
authorities may seek under appropriate legal authorization to
gain access to information for the purpose of investigating
and prosecuting criminal activity. Encryption also poses a
threat to intelligence gathering for national security and
foreign policy purposes, an activity that depends on access to
information of foreign governments and other foreign entities.

   Note that other applications of cryptography -- for
purposes of assuring data integrity and authenticating
identities of users and computer systems -- do not pose
dilemmas for law enforcement and national security in the same
way that confidentiality does.


    National Cryptography Policy for the Information Age

   For many years, concern over foreign threats to national
security has been the primary driver of a national
cryptography policy that has sought to maximize the protection
of U.S. military and diplomatic communications while denying
the confidentiality benefits of cryptography to foreign
adversaries through the use of export controls on cryptography
and related technical data. More recently, the U.S. government
has aggressively promoted the domestic use of a certain kind
of cryptography -- escrowed encryption -- that would provide
strong protection for legitimate uses but would permit access
by law enforcement officials when authorized by law. Today,
these and other dimensions of current national
cryptography policy generate considerable controversy.

   All of the various stakes are legitimate: privacy for
individuals, protection of sensitive or proprietary
information for businesses, ensuring the continuing
reliability and integrity of nationally critical information
systems and networks, law enforcement access to stored and
communicated information for purposes of investigating and
prosecuting crime, and national security access to information
stored or communicated by foreign powers or other entities and
organizations whose interests and intentions are relevant to
the national security and the foreign policy interests of the
United States. Informed public discussion of the issues must
begin by acknowledging the legitimacy both of information
gathering for law enforcement and national security purposes
and of information security for law-abiding individuals and
businesses.

   The conduct of the debate regarding national cryptography
policy has been complicated because a number of participants
have often invoked classified information that cannot be made
public. However, the cleared members of the National Research
Council's Committee to Study National Cryptography Policy (13
of the 16 committee members) concluded that *the debate over
national cryptography policy can be carried out in a
reasonable manner on an unclassified basis*. Classified
material is often important to operational matters in specific
cases, but it is neither essential to the big picture of why
cryptography policy is the way it is nor required for the
general outline of how technology will and policy should
evolve in the future.

   The problems of information vulnerability, the legitimacy
of the various national interests described above, and trends
such as those outlined in Box ES.2 point to the need for a
concerted effort to protect vital information assets of the
United States. Cryptography is one important element of a
comprehensive U.S. policy for better information security.

   The committee believes that *U.S. national policy should be
changed to support the broad use of cryptography in ways that
take into account competing U.S. needs and desires for
individual privacy, international economic competitiveness,
law enforcement, national security, and world leadership*.
Because cryptography is an important tool for protecting
information and because it is very difficult for governments
to control, the committee believes that the widespread
nongovernment use of cryptography in the United States and
abroad is inevitable in the long run. Accordingly, the proper
role of national cryptography policy is to facilitate a
judicious transition between today's world of high information
vulnerability and a future world of greater information
security, while to the extent possible meeting the legitimate
needs of law enforcement and information gathering for
national security and foreign policy purposes.

   The committee found that *current national cryptography
policy is not adequate to support the information security
requirements of an information society*. Indeed, current
policy discourages the use of cryptography, whether
intentionally or not, and in so doing impedes the ability of
the nation to use cryptographic tools that would help to
remediate certain important vulnerabilities. National
cryptography policy should support three objectives:

   1.   Broad availability of cryptography to all legitimate
        elements of U.S. society;

   2.   Continued economic growth and leadership of key U.S.
        industries and businesses in an increasingly global
        economy, including but not limited to U.S. computer,
        software, and communications companies; and

   3.   Public safety and protection against foreign and
        domestic threats.

   Objectives 1 and 2 argue for a policy that places few
government restrictions on the use of cryptography and
actively promotes the use of cryptography on a broad front.
Objective 3 argues that some kind of government policy role in
the deployment and use of cryptography for confidentiality may
continue to be necessary for public safety and national
security reasons. These three objectives can be met within a
framework recognizing that *on balance, the advantages of more
widespread use of cryptography outweigh the disadvantages*.

____________________________________________________________

       BOX ES.2 The Past and Future World Environment

Past                          Future Trends
_______________________       _________________________________
Computing and                 Computer and information
communications networks       acquisition, retrieval and
were expensive and            processing are inexpensive and
rare.                         ubiquitous. Rapid growth is
                              evident in the development and
                              deployment of diverse technology-
                              based services.

Communications networks       Communications networks are
were analog and voice         digital and oriented toward video
oriented;                     and data transmissions.
communications made           Communications make heavy use of
heavy use of dedicated        shared infrastructure and
lines.                        media (e.g., satellites,
                              wireless). Passive eavesdropping
                              is thus harder to detect.

Telecommunications was        Telecommunications involves a
controlled by a small         large number of players.
number of players.

The U.S. economy was          The U.S. economy is important but
unquestionably dominant       not dominant in the world, and it
in the world.                 is increasingly interlinked with
                              allies, customers, suppliers,
                              vendors, and competitors all over
                              the world.

The economy was               The economy is oriented toward
oriented toward               information and services.
material production.

The security threat was       Security threats are much more
relatively homogeneous        heterogeneous than in the Cold
(Soviet Union and Cold        War, both in origin and in
War).                         nature.

Cryptography was used         Cryptography has important
primarily for military        applications throughout all
and diplomatic                aspects of society.
purposes. Government          Nongovernmental entities have
had a relative monopoly       significant expertise and
on cryptographic              capability built on an open,
expertise and                 public, and expanding base of
capability.                   scientific and technical
                              knowledge about cryptography.

____________________________________________________________


   The recommendations below address several critical policy
areas. In the interests of brevity, only short rationales for
the recommendations are given here. The reader is urged to
read Chapter 8 of the report for essential qualifications,
conditions, and explanations.


        A FRAMEWORK FOR NATIONAL CRYPTOGRAPHY POLICY

   The framework for national cryptography policy should
provide coherent structure and reduce uncertainty for
potential vendors and for nongovernment and government users
of cryptography in ways that policy does not do today.

*Recommendation 1: No law should bar the manufacture, sale, or
use of any form of encryption within the United States*.
Specifically, a legislative ban on the use of unescrowed
encryption would raise both technical and legal or
constitutional issues. Technically, many methods are available
to circumvent such a ban; legally, constitutional issues,
especially those related to free speech, would be almost
certain to arise, issues that are not trivial to resolve.
Recommendation 1 is made to reinforce this particular aspect
of the Administration's cryptography policy.

*Recommendation 2: National cryptography policy should be
developed by the executive and legislative branches on the
basis of open public discussion and governed by the rule of
law*. Only a national discussion of the issues involved in
national cryptography policy can result in the broadly
acceptable social consensus that is necessary for any policy
in this area to succeed. A consensus derived from such
deliberations, backed by explicit legislation when necessary,
will lead to greater degrees of public acceptance and trust,
a more certain planning environment, and better connections
between policy makers and the private sector on which the
nation's economy and social fabric rest.

*Recommendation 3: National cryptography policy affecting the
development and use of commercial cryptography should be more
closely aligned with market forces*. As cryptography has
assumed greater importance to nongovernment interests,
national cryptography policy has become increasingly
disconnected from market reality and the needs of parties in
the private sector. Experience with technology deployment
suggests that reliance on market forces is generally the most
effective way to promote the widespread use of a new
technology. Since the committee believes that widespread
deployment and use of cryptography are in the national
interest, it believes that national cryptography policy should
align itself with user needs and market forces to the maximum
feasible extent. Accordingly, national cryptography policy
should emphasize the freedom of domestic users to determine
cryptographic functionality, protection, and implementations
according to their security needs as they see fit; encourage
the adoption of cryptographic standards by the federal
government and private parties that are consistent with
prevailing industry practice; and support the use of
algorithms, product designs, and product implementations that
are open to public scrutiny.


                       EXPORT CONTROLS

   For many years, the United States has controlled the export
of cryptographic technologies, products, and related technical
information as munitions (on the U.S. Munitions List
administered by the State Department). However, the current
export control regime for cryptography is an increasing
impediment to the information security efforts of U.S. firms
competing and operating in world markets, developing strategic
alliances internationally, and forming closer ties with
foreign customers and suppliers. Export controls also have had
the effect of reducing the domestic availability of products
with strong encryption capabilities. Looking to the future,
both U.S. and foreign companies have the technical capability
to integrate high-quality cryptographic features into their
products and services. U.S. export controls may stimulate the
growth of significant foreign competition for U.S. vendors to
the detriment of both U.S. national security interests and
U.S. business and industry.

   Some relaxation of today's export controls on cryptography
is warranted. Relaxation would create an environment in which
U.S. and multinational firms and individuals could use the
same security products in the United States and abroad,
thereby supporting better information security for U.S. firms
operating internationally. It would also increase the
availability of good cryptography products in the United
States. Finally, it would help to solidify U.S. leadership in
a field critical to national security and economic
competitiveness.

   At the same time, cryptography is inherently dual-use in
character, with important applications to both civilian and
military purposes. Because cryptography is a particularly
critical military application for which few technical
alternatives are available, retention of some export controls
on cryptography will mitigate the loss to U.S. national
security interests in the short term, allow the United States
to evaluate the impact of relaxation on national security
interests before making further changes, and "buy time" for
U.S. national security authorities to adjust to a new
technical reality.

*Recommendation 4: Export controls on cryptography should be
progressively relaxed but not eliminated*.

   *Recommendation 4.1 -- Products providing confidentiality
at a level that meets most general commercial requirements
should be easily exportable.(1) Today, products with
encryption capabilities that incorporate the 56-bit DES
algorithm provide this level of confidentiality and should be
easily exportable*. As a condition of export, vendors of
products covered under Recommendation 4.1 (and 4.2 below)
would be required to provide to the U.S. government full
technical specifications of their product and reasonable
technical assistance upon request in order to assist the U.S.
government in understanding the product's internal operations.
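
   As a rough illustration of what key length implies, the
short sketch below (Python; the trial rate is an invented
figure used only to show the arithmetic, not an estimate of
any actual capability) compares the sizes of the 40-bit and
56-bit key spaces and the time an exhaustive search of each
would take.

   # Illustrative arithmetic only: 40-bit versus 56-bit key spaces, and the
   # time an exhaustive search would take at an assumed (hypothetical) rate.
   keys_40 = 2 ** 40            # about 1.1 x 10^12 possible keys
   keys_56 = 2 ** 56            # about 7.2 x 10^16 possible keys
   trials_per_second = 1e9      # assumed attacker capability, for illustration

   for label, keys in (("40-bit", keys_40), ("56-bit", keys_56)):
       days = keys / trials_per_second / 86_400
       print(f"{label}: {keys:.2e} keys, about {days:,.1f} days to try them all")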

   *Recommendation 4.2 -- Products providing stronger
confidentiality should be exportable on an expedited basis to
a list of approved companies if the proposed product user is
willing to provide access to decrypted information upon
legally authorized request*. Firms on the list would agree to
abide by a set of requirements described in Chapter 8 that
would help to ensure the ability of the U.S. government to
obtain the plaintext of encrypted information upon
presentation of a proper law enforcement request. (Plaintext
is the information that was initially encrypted.)

   *Recommendation 4.3 -- The U.S. government should
streamline and increase the transparency of the export
licensing process for cryptography*. Greater efforts in this
area would reduce uncertainty regarding rules, time lines, and
the criteria used in making decisions about the exportability
of particular products. Chapter 8 describes specific possible
steps that might be taken.

----------

   (1)  For purposes of Recommendation 4.1, a product that is
"easily exportable" will automatically qualify for treatment
and consideration (i.e., commodity jurisdiction, or CJ) under
the Commerce Control List (CCL). Automatic qualification
refers to the same procedure
under which software products using RC2 or RC4 algorithms for
confidentiality with 40-bit key sizes currently qualify for
the CCL.

____________________________________________________________


            ADJUSTING TO NEW TECHNICAL REALITIES

   As noted above, cryptography is helpful to some dimensions
of law enforcement and national security and harmful to
others. The committee accepts that the onset of an information
age is likely to create many new challenges for public safety,
among them the greater use of cryptography by criminal
elements of society. If law enforcement authorities are unable
to gain access to the encrypted communications and stored
information of criminals, some criminal investigations and
prosecutions will be significantly impaired. For these
reasons, specific steps should be taken to mitigate these
difficulties. In the realm of national security, new
capabilities are needed to better cope with the challenges
that cryptography presents.

   Since 1993, the approach of the U.S. government to these
problems has been an aggressive promotion of escrowed
encryption (see Chapter 5) as a pillar of the technical
foundation for national cryptography policy, primarily in
response to the law enforcement concerns described above.
Initiatives promoted by the U.S. government include the
Escrowed Encryption Standard (a voluntary Federal Information
Processing Standard for secure voice telephony), the
Capstone/Fortezza initiative that provides escrowed encryption
capabilities for secure data storage and communications, and
a recent proposal to liberalize export controls on certain
encryption products if the keys are "properly escrowed."

   The committee understands the Administration's rationale
for promoting escrowed encryption but believes that escrowed
encryption should be only one part of an overall strategy for
dealing with the problems that encryption poses for law
enforcement and national security. The committee's view of an
appropriate overall strategy is described below, and escrowed
encryption is the focus of Recommendation 5.3.

*Recommendation 5: The U.S. government should take steps to
assist law enforcement and national security to adjust to new
technical realities of the information age*. Over the past 50
years, both law enforcement and national security authorities
have had to cope with a variety of changing technological
circumstances. For the most part, they have coped with these
changes quite well. Today, however, "business as usual" will
not suffice to bring agencies responsible for law enforcement
and national security into the information age. At the same
time, both law enforcement and national security have
demonstrated considerable adaptability to new environments;
this record of adaptability provides considerable confidence
that they can adapt to a future of digital communications and
stored data as well.

   The specific subrecommendations that follow attempt to
build on this record. They are intended to support law
enforcement and national security missions in their totality
-- for law enforcement, in both crime prevention and crime
prosecution and investigation; for national security, in both
defense of nationally critical information systems and the
collection of intelligence information.

   *Recommendation 5.1 -- The U.S. government should actively
encourage the use of cryptography in nonconfidentiality
applications such as user authentication and integrity
checks*. These applications are particularly important in
addressing vulnerabilities of nationally critical information
systems and networks. Furthermore, these applications of
cryptography are important crime-fighting measures. To date,
national cryptography policy has not fully supported such
nonconfidentiality uses. Some actions have been taken in this
area, but these actions have sometimes conflicted with
government concerns about confidentiality. As importantly,
government has expressed considerably more concern in the
public debate regarding the deleterious impact of widespread
cryptography used for confidentiality than over the
deleterious impact of not deploying cryptographic capabilities
for user authentication and data integrity. Chapter 8 provides
a number of illustrative examples to demonstrate what specific
actions government can take to promote nonconfidentiality
applications of cryptography.
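
   One way to see that authentication and integrity checking
need not involve secrecy is the digital signature: the message
itself travels in the clear, but any recipient holding the
signer's public key can verify its origin and detect any
alteration. A minimal sketch follows (Python, assuming the
third-party "cryptography" package and using a modern
signature algorithm purely for concreteness).

   # Nonconfidentiality use of cryptography: the message is sent in the
   # clear; the signature only proves origin and exposes alteration.
   from cryptography.exceptions import InvalidSignature
   from cryptography.hazmat.primitives.asymmetric.ed25519 import (
       Ed25519PrivateKey,
   )

   signer = Ed25519PrivateKey.generate()
   message = b"purchase order 4471: 500 units"     # readable by anyone
   signature = signer.sign(message)

   verifier = signer.public_key()
   verifier.verify(signature, message)             # succeeds: authentic and intact
   try:
       verifier.verify(signature, b"purchase order 4471: 900 units")
   except InvalidSignature:
       print("alteration detected")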

   *Recommendation 5.2 -- The U.S. government should promote
the security of the telecommunications networks more actively.
At a minimum, the U.S. government should promote the link
encryption of cellular communications (2) and the improvement
of security at telephone switches*. Such steps would not
diminish government access for lawfully authorized wiretaps
through the requirements imposed on carriers today to
cooperate with law enforcement in such matters. Furthermore,
by responding, through the telecommunications service
providers, to public demands for greater security in voice
communications that are widely known to be nonsecure, these
measures would also reduce the demand for (and thus the
availability of)
devices used to provide end-to-end encryption of voice
communications. Without a ready supply of such devices, a
criminal user would have to go to considerable trouble to
obtain a device that could thwart a lawfully authorized
wiretap.

   *Recommendation 5.3 -- To better understand how escrowed
encryption might operate, the U.S. government should explore
escrowed encryption for its own uses. To address the critical
international dimensions of escrowed communications, the U.S.
government should work with other nations on this topic*.
Escrowed encryption has both benefits and risks. The benefits
for law enforcement and national security are that when
escrowed encryption is properly implemented and widely
deployed, law enforcement and national security authorities
will be able to obtain access to escrow-encrypted data in
specific instances when authorized by law. Escrowed encryption
also enables end users to recover encrypted stored data to
which access has been inadvertently lost. The risk to end
users is that escrowed encryption provides a potentially lower
degree of confidentiality because it is specifically designed
to permit exceptional access by parties not originally
intended to have access to the encrypted data.
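
   In the abstract, escrowed encryption can be pictured as
encrypting the key that protects the data under a second key
held by an escrow agent, so that the data key can be released
to a party authorized to have it. The sketch below is purely
conceptual (Python, assuming the third-party "cryptography"
package; the parties and data are invented) and does not
describe any particular escrow system.

   # Conceptual sketch of key escrow: the data key is itself encrypted
   # under an escrow agent's key, so an authorized party can recover it.
   from cryptography.fernet import Fernet

   escrow_agent_key = Fernet.generate_key()     # held by the escrow agent
   data_key = Fernet.generate_key()             # protects the user's data

   ciphertext = Fernet(data_key).encrypt(b"business records")
   escrowed_data_key = Fernet(escrow_agent_key).encrypt(data_key)

   # Upon a legally authorized request, or when the user has lost the
   # original key, the agent releases the data key, which then decrypts
   # the stored ciphertext.
   recovered_key = Fernet(escrow_agent_key).decrypt(escrowed_data_key)
   assert Fernet(recovered_key).decrypt(ciphertext) == b"business records"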

   Aggressive government promotion of escrowed encryption is
not appropriate at this time for several reasons: the lack of
operational experience with how a large-scale infrastructure
for escrowed encryption would work; the lack of demonstrated
evidence that escrowed encryption will solve the most serious
problems that law enforcement authorities face; the likely
harmful impact on the natural market development of
applications made possible by new information services and
technologies; and the uncertainty of the market response to
such aggressive promotion. At the same time, many policy
benefits can be gained by an operational exploration of
escrowed encryption by the U.S. government for government
applications; such exploration would enable the U.S.
government to develop the base of experience on which to build
a more aggressive promotion of escrowed encryption should
circumstances develop in such a way that encrypted
communications come to pose a significant problem for law
enforcement.

   *Recommendation 5.4 -- Congress should seriously consider
legislation that would impose criminal penalties on the use of
encrypted communications in interstate commerce with the
intent to commit a federal crime*. The purpose of such a
statute would be to discourage the use of cryptography for
illegitimate purposes, thus focusing the weight of the
criminal justice system on individuals who were in fact guilty
of criminal activity rather than on law-abiding citizens and
criminals alike. Any statute in this area should be drawn
narrowly.

   *Recommendation 5.5 -- High priority should be given to
research, development, and deployment of additional technical
capabilities for law enforcement and national security to cope
with new technological challenges*. Such R&D should be
undertaken during the time that it will take for cryptography
to become truly ubiquitous. These new capabilities are almost
certain to have a greater impact on future information
collection efforts than will aggressive attempts to promote
escrowed encryption to a resistant market.

----------

   (2)  "Link encryption" refers to the practice of encrypting
information being communicated in such a way that it is
encrypted only in between the node from which it is sent and
the node where it is received; while the information is at the
nodes themselves, it is unencrypted. In the context of link
encryption for cellular communications, a cellular call would
be encrypted between the mobile handset and the ground
station. When carried on the landlines of the telephone
network, the call would be unencrypted.
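
   A minimal sketch of the distinction (Python, assuming the
third-party "cryptography" package; the link names are
illustrative): the call is protected only on the over-the-air
link and reverts to plaintext once it reaches the ground
station, where a lawfully authorized wiretap can still
operate.

   # Link encryption: the call is encrypted only on the radio link and is
   # plaintext again at the ground station and on the landline network.
   from cryptography.fernet import Fernet

   air_link = Fernet(Fernet.generate_key())   # key for the handset-tower link

   call = b"hello"
   over_the_air = air_link.encrypt(call)               # protected on the radio link
   at_ground_station = air_link.decrypt(over_the_air)  # plaintext at the node
   on_the_landline = at_ground_station                 # carried in the clear thereafter
   assert on_the_landline == call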

____________________________________________________________


               THE POLICY RELATIONSHIP BETWEEN
            INFORMATION SECURITY AND CRYPTOGRAPHY

   Although this report is concerned primarily with national
cryptography policy, any such policy is only one component of
a national information security policy. Without a
forward-looking and comprehensive national information
security policy, changes in national cryptography policy may
have little operational impact on U.S. information security.

*Recommendation 6: The U.S. government should develop a
mechanism to promote information security in the private
sector*. As is widely acknowledged, the U.S. government is not
well organized to meet the challenges presented by an
information society, and no government agency has the
responsibility to promote information security in the private
sector. Absent a coordinated approach to promoting information
security, the needs of many stakeholders may well be given
inadequate attention and notice; those who are pursuing
enhanced information security and those who have a need for
legal access to stored or communicated information must both
be included in a robust process for managing the
often-competing issues and interests that will inevitably
arise over time. Government has an important role in actively
promoting the security of information systems and networks
critical to the nation's welfare (e.g., the banking and
financial system, the public switched telecommunications
network, the air traffic control system, the electric power
grid). In other sectors of the economy, the role of the U.S.
government should be limited to providing information and
expertise. Chapter 8 provides some illustrative examples of
what the government might do to promote information security
in the private sector.


                         CONCLUSION

   The committee believes that its recommendations will lead
to enhanced confidentiality and protection of information for
individuals and companies, thereby reducing economic and
financial crimes and economic espionage from both domestic and
foreign sources. In addition, they will result in improved
security and assurance for the information systems and
networks used by the nation -- a more secure national
information infrastructure. While the recommendations will in
these ways contribute to the prevention of crime and enhance
national security, the committee recognizes that the spread of
cryptography will increase the burden of those in government
charged with carrying out certain specific law enforcement and
intelligence activities. It believes that widespread
commercial and private use of cryptography in the United
States and abroad is inevitable in the long run and that its
advantages, on balance, outweigh its disadvantages. Thus, the
committee concluded that the overall interests of the
government and the nation would best be served by a policy
that fosters a judicious transition toward the broad use of
cryptography.

[End Executive Summary]

____________________________________________________________


               A Road Map Through This Report


   This report responds to a request made in the Defense
Authorization Act of FY 1994 by the U.S. Congress for the
National Research Council to conduct a comprehensive study of
national cryptography policy, a subject that has generated
considerable controversy in the past few years.

   This report is organized into three parts. Part I frames
the policy issues. Chapter 1 outlines the problem of growing
information vulnerability and the need for technology and
policy to mitigate this problem. Chapter 2 describes possible
roles for cryptography in reducing information vulnerability
and places cryptography into context as one element of an
overall approach to ensuring information security. Chapter 3
discusses nongovernment needs for access to encrypted
information and related public policy issues, specifically
those related to information gathering for law enforcement and
national security purposes.

   Part II of this report describes the instruments and goals
of current U.S. cryptography policy and some of the issues
raised by current policy. Chapter 4 is concerned primarily
with export controls on cryptography, a powerful tool that has
long been used in support of national security objectives but
whose legitimacy has come under increasing fire in the last
several years. Chapter 5 addresses escrowed encryption, an
approach aggressively promoted by the federal government as a
technique for balancing national needs for information
security with those of law enforcement and national security.
Chapter 6 discusses other dimensions of national cryptography
policy, including the Digital Telephony Act of 1994 (also
known as the
Communications Assistance for Law Enforcement Act) and a
variety of other levers used in national cryptography policy
that do not often receive much attention in the debate.

   Part III has two goals: enlarging the space of possible
policy options, and offering findings and recommendations.
Chapter 7 discusses a variety of options for cryptography
policy, some of which have been suggested or mentioned in
different forums (e.g., in public and/or private input
received by the committee, or by various members of the
committee). These policy options include alternative export
control regimes for cryptography and alternatives for
providing third-party access capabilities when necessary. In
addition, Chapter 7 addresses several issues related to or
affected by cryptography that will appear on the horizon in
the foreseeable future. Chapter 8 describes the committee's
findings and recommendations.

   A set of appendixes provides more detail where needed.

[End Road Map]

____________________________________________________________










                           Part I

                  Framing the Policy Issues


   Part I is intended to explicate the fundamental issues
underlying national cryptography policy. Chapter 1 outlines
basic elements of a critical problem facing the nation -- the
increasing vulnerability of information, a commodity that has
become essential to national well-being and future
opportunity. This vulnerability results from a number of
trends, including the explosive growth of digital
communications and data storage, the increasingly
international dimensions of business, and the growing
dependence of the nation on a number of critical information
systems and networks. Chapter 2 describes how cryptography can
play an important role in reducing the information
vulnerability of the nation, of businesses, and of private
individuals. Chapter 2 also places cryptography into context,
as one element of an overall approach to information security,
as a product that responds to factors related to both supply
and demand, and as a technology whose large-scale use requires
a supporting infrastructure. Chapter 3 discusses public policy
issues raised by the need for access to encrypted information.
The prospect of near-absolute confidentiality of information --
a prospect enabled by modern cryptography -- is reassuring to
some and quite disturbing to others. Important public policy
issues are raised by law enforcement authorities, who regard
the ability to obtain information surreptitiously but legally
as essential to their crime-fighting abilities, and by
national security authorities, who place a high value on the
ability to monitor the communications of potential
adversaries. Even private individuals, who might wish to
encrypt records securely, may face the need to recover their
data as though they were outsiders if they have forgotten how
to gain "legitimate" access; the same is true for businesses
in some situations.

____________________________________________________________


                              1

        Growing Vulnerability in the Information Age


   Chapter 1 frames a fundamental problem facing the United
States today -- the need to protect against the growing
vulnerability of information to unauthorized access and/or
change as the nation makes the transition from an industrial
age to an information age. Society's reliance on a changing
panoply of information technologies and technology-enabled
services, the increasingly global nature of commerce and
business, and the ongoing desire to protect traditional
freedoms as well as to ensure that government remains capable
of fulfilling its responsibilities to the nation all suggest
that future needs for information security will be large.
These factors make clear the need for a broadly acceptable
national cryptography policy that will help to secure vital
national interests.


      1.1 THE TECHNOLOGY CONTEXT OF THE INFORMATION AGE

   The information age is enabled by computing and
communications technologies (collectively known as information
technologies) whose rapid evolution is almost taken for
granted today. Computing and communications systems appear in
virtually every sector of the economy and increasingly in
homes and other locations. These systems focus economic and
social activity on information -- gathering, analyzing,
storing, presenting, and disseminating information in text,
numerical, audio, image, and video formats -- as a product
itself or as a complement to physical or tangible products.(1)

   Today's increasingly sophisticated information technologies
cover a wide range of technical progress:

   +    *Microprocessors and workstations* are increasingly
important to the computing infrastructure of companies and the
nation. Further increases in speed and computational power
today come from parallel or distributed processing with many
microcomputers and processors rather than faster
supercomputers.

   +    *Special-purpose electronic hardware* is becoming
easier to develop. Thus, it may make good sense to build
specialized hardware optimized for performance, speed, or
security with respect to particular tasks; such specialized
hardware will in general be better adapted to these purposes
than general-purpose machines applied to the same tasks.

   +    *Media* for transporting digital information are
rapidly becoming faster (e.g., fiber optics instead of coaxial
cables), more flexible (e.g., the spread of wireless
communications media), and less expensive (e.g., the spread of
CD-ROMs as a vehicle for distributing digital information).
Thus, it becomes feasible to rely on the electronic
transmission of larger and larger volumes of information and
on the storage of such volumes on ever-smaller physical
objects.

   +    *Convergence* of technologies for communications and
for computing is under way. Today, the primary difference
between
communications and computing is the distance traversed by data
flows: in communications, the traversed distance is measured
in miles (e.g., two people talking to each other), while in
computing the traversed distance is measured in microns (e.g.,
between two subcomponents on a single integrated circuit). A
similar convergence affects companies in communications and in
computing -- their boundaries are blurring, their scopes are
changing, and their production processes overlap increasingly.

   +    *Software* is increasingly carrying the burden of
providing functionality in information technology. In general,
software is what gives hardware its functional capabilities,
and different software running on the same hardware can change
the functionality of that hardware entirely. Since software is
intangible, it can be deployed widely on a very short time
scale compared to that of hardware. Box 1.1 contains more
discussion of this point.

   As these examples suggest, information technologies are
ever more affordable and ubiquitous. In all sectors of the
economy, they drive demand for information systems; such
demand will continue to be strong and experience significant
growth rates. High-bandwidth and/or wireless media are
becoming more and more common. Interest in and use of the
Internet and similar public networks will continue to
experience very rapid growth.

----------

   (1)  Citations to a variety of press accounts can be found
in Computer Science and Telecommunications Board (CSTB),
National Research Council, *Information Technology and
Manufacturing: A Research Agenda*, National Academy Press,
Washington, D.C., 1993; CSTB, *Information Technology in the
Service Society: A Twenty-First Century Lever*, 1993; CSTB,
*Realizing the Information Future: The Internet and Beyond*,
1994; CSTB, *Keeping the Computer and Communications Industry
Competitive: Convergence of Computing, Communications, and
Entertainment*, 1995; and CSTB, *The Unpredictable Certainty:
Information Infrastructure Through 2000*, 1996.

____________________________________________________________


         1.2 TRANSITION TO AN INFORMATION SOCIETY --
       INCREASING INTERCONNECTIONS AND INTERDEPENDENCE

   As the availability and use of computer-based systems grow,
so, too, does their interconnection. The result is a shared
infrastructure of information, computing, and communications
resources that facilitates collaboration at a distance,
geographic dispersal of operations, and sharing of data. With
the benefits of a shared infrastructure also come costs.
Changes in the technology base have created more
vulnerabilities, as well as the potential to contain them. For
example, easier access for users in general implies easier
access for unauthorized users.

   The design, mode of use, and nature of a shared
infrastructure create vulnerabilities for all users. For
national institutions such as banking, new risks arise as the
result of greater public exposure through such
interconnections. For example, a criminal who penetrates one
bank interconnected to the world's banking system can steal
much larger amounts of money than are stored at that one bank.
(Box 1.2 describes a recent electronic bank robbery.) Reducing
vulnerability to breaches of security will depend on the
ability to identify and authenticate people, systems, and
processes and to assure with high confidence that information
is not improperly manipulated, corrupted, or destroyed.

   Although society is entering an era abounding with new
capabilities, many societal practices today remain similar to
those of the 1960s and 1970s, when computing was dominated by
large, centralized mainframe computers. These practices have
not evolved to reflect the introduction in the 1980s and 1990s
of personal computers, portable computing, and increasingly
ubiquitous communications networks. Thus, people continue to
relinquish control over substantial amounts of personal
information through credit card transactions, proliferating
uses of Social Security numbers, and participation in
frequent-buyer programs with airlines and stores.
Organizations implement trivial or no protection for
proprietary data and critical systems, trusting policies to
protect portable storage media or relying on simple passwords
to protect information.

   These practices have endured against a backdrop of
relatively modest levels of commercial and individual risk;
for example, the liability of a credit-card owner for credit
card fraud perpetrated by another party is limited by law to
$50. Yet most computer and communications hardware and
software systems are subject to a wide range of
vulnerabilities, as described in Box 1.3. Moreover,
information on how to exploit such vulnerabilities is often
easy to obtain. As a result, a large amount of information
that people say they would like to protect is in fact
available through entirely legal channels (e.g., purchasing a
credit report on an individual) or in places that can be
accessed improperly through technical attacks requiring
relatively modest effort.

   Today, the rising level of familiarity with computer-based
systems is combining with an explosion of experimentation with
information and communications infrastructure in industry,
education, health care, government, and personal settings to
motivate new uses of and societal expectations about the
evolving infrastructure. A key feature of the new environment
is connection or exchange: organizations are connecting
internal private facilities to external public ones; they are
using public networks to create virtual private networks, and
they are allowing outsiders such as potential and actual
customers, suppliers, and business allies to access their
systems directly. One vision of a world of electronic commerce
and what it means for interconnection is described in Box 1.4.

   Whereas a traditional national security perspective might
call for keeping people out of sensitive stores of information
or communications networks, national economic and social
activity increasingly involves the exact opposite: inviting
people from around the world to come in -- with varying
degrees of recognition that all who come in may not be
benevolent. Box 1.5 describes some of the tensions between
security and openness. Such a change in expectations and
perspective is unfolding in a context in which controls on
system access have typically been deficient, beginning with
weak operating system security. The distributed and
internetworked communications systems that are emerging raise
questions about protecting information regardless of the path
traveled (end-to-end security), as close to the source and
destination as possible.

   The international dimensions of business and the growing
importance of competitiveness in the global marketplace
complicate the picture further. Although "multinationals" have
long been a feature of the U.S. economy, the inherently
international nature of communications networks and the
growing capabilities for distributing and accessing
information worldwide are helping many activities and
institutions to transcend national boundaries. (See Box 1.6.)

   At the same time, export markets are at least as important
as domestic U.S. markets for a growing number of goods and
service producers, including producers of information
technology products as well as a growing variety of high- and
low-technology products. The various aspects of globalization
-- identifying product and merchandising needs that vary by
country; establishing and maintaining employment, customer,
supplier, and distribution relationships by country;
coordinating activities that may be dispersed among countries
but result in products delivered to several countries; and so
on -- place new demands on U.S.-based and U.S.-owned
information, communication, organizational, and personal
resources and systems.


          1.3 COPING WITH INFORMATION VULNERABILITY

   Solutions to cope with the vulnerabilities described above
require both appropriate technology and user behavior and are
as varied as the needs of individual users and organizations.
Cryptography -- a technology described more fully in Chapter
2 and Appendix C -- is an important element of many solutions
to information vulnerability that can be used in a number of
different ways. National cryptography policy -- the focus of
this report -- concerns how and to what extent government
affects the development, deployment, and use of this important
technology. To date, public discussion of national
cryptography policy has focused on one particular application
of cryptography, namely its use in protecting the
confidentiality of information and communications.

   Accordingly, consideration of national cryptography policy
must take into account two fundamental issues:

   +    If the public information and communications
infrastructure continues to evolve with very weak security
throughout, reflecting both deployed technology and user
behavior, the benefits from cryptography for confidentiality
will be significantly less than they might otherwise be.

   +    The vulnerabilities implied by weak security overall
affect the ability of specific mechanisms such as cryptography
to protect not only confidentiality but also the integrity of
information and systems and the availability of systems for
use when sought by their users. Simply protecting (e.g.,
encrypting) sensitive information from disclosure can still
leave the rest of a system open to attacks that can undermine
the encryption (e.g., the lack of access controls that could
prevent the insertion of malicious software) or destroy the
sensitive information.

   Cryptography thus must be considered in a wider context. It
is not a panacea, but it is extremely important to ensuring
security and can be used to counter several vulnerabilities.

   Recognition of the need for system and infrastructure
security and demand for solutions are growing. Although demand
for solutions has yet to become widespread, the trend is away
from a marketplace in which the federal government (2) was the
only meaningful customer. Growing reliance on a shared
information and communications infrastructure means that all
individuals and organizations should be, and the committee
believes will become, the dominant customers for better
security. That observation is inherent in the concept of
infrastructure as something on which people rely.

   What may be less obvious is that as visions of ubiquitous
access and interconnection are increasingly realized,
individual, organizational, and governmental needs may become
aligned. Such an alignment would mark a major change from the
past. Again, sharing of a common infrastructure is the cause:
everyone, individual or organization, public or private
sector, is a user. As significantly, all of these parties face
a multitude of threats to the security of information (Box
1.7). Consideration of the nation's massive dependence on the
public switched telecommunications network, which is one of
many components of the information and communications
infrastructure, provides insight into the larger set of
challenges posed by a more complex infrastructure (Box 1.8).

   To illustrate the broad panorama of stakeholder interests
in which national cryptography policy is formulated, the next
several sections examine different aspects of society from the
standpoint of needs for information security.

----------

   (2)  The more general statement is that the market
historically involved national governments in several
countries as the principal customers.

____________________________________________________________


          1.4 THE BUSINESS AND ECONOMIC PERSPECTIVE

   For purposes of this report, the relationship of U.S.
businesses to the information society has two main elements.
One element is that of protecting information important to the
success of U.S. businesses in a global marketplace. The second
element is ensuring the nation's continuing ability to exploit
U.S. strengths in information technology on a worldwide basis.


       1.4.1 Protecting Important Business Information

   A wide range of U.S. companies operating internationally
are threatened by foreign information-collection efforts. The
National Counterintelligence Center (NACIC) reports that "the
U.S. industries that have been the targets in most cases of
economic espionage and other foreign collection activities
include biotechnology; aerospace; telecommunications; computer
hardware/software, advanced transportation and engine
technology; advanced materials and coatings; energy research;
defense and armaments technology; manufacturing processes; and
semiconductors."(3) Foreign collectors target proprietary
business information such as bid, contract, customer, and
strategy information, as well as corporate financial and trade
data. Of all of the information vulnerabilities facing U.S.
companies internationally (Box 1.7), electronic
vulnerabilities appear to be the most significant. For
example, the NACIC concluded that "specialized technical
operations (including computer intrusions, telecommunications
targeting and intercept, and private-sector encryption
weaknesses) account for the largest portion of economic and
industrial information lost by U.S. corporations." The NACIC
noted,

   Because they are so easily accessed and intercepted,
   corporate telecommunications -- particularly international
   telecommunications -- provide a highly vulnerable and
   lucrative source for anyone interested in obtaining trade
   secrets or competitive information. Because of the
   increased usage of these links for bulk computer data
   transmission and electronic mail, intelligence collectors
   find telecommunications intercepts cost-effective. For
   example, foreign intelligence collectors intercept
   facsimile transmissions through government-owned telephone
   companies, and the stakes are large -- approximately half
   of all overseas telecommunications are facsimile
   transmissions. Innovative "hackers" connected to computers
   containing competitive information evade the controls and
   access companies' information. In addition, many American
   companies have begun using electronic data interchange, a
   system of transferring corporate bidding, invoice, and
   pricing data electronically overseas. Many foreign
   government and corporate intelligence collectors find this
   information invaluable.(4)

   Why is electronic information so vulnerable? The primary
reason is that it is computer-readable and thus much more
vulnerable to automated search than are intercepted voice or
postal mail transmissions. Once the information is collected
(e.g., through an existing wiretap or a protocol analyzer on
an Internet router), it is relatively simple for computers to
search streams of electronic information for word combinations
of interest (e.g., "IBM," "research," and "superconductivity"
in the same message). As the cost of computing drops, the cost
of performing such searches drops.(5) The threat posed by
automated search, coupled with the sensitivity of certain
communications that are critical for nongovernment users, is
at the root of nongovernment demand for security.(6)
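
   The simplicity of such automated searching can be suggested
in a few lines of code; in the sketch below (Python), the
messages and keywords are, of course, invented for
illustration.

   # Automated search over intercepted traffic: flag any message in which
   # all of the keywords of interest occur together.
   keywords = {"ibm", "research", "superconductivity"}

   messages = [
       "ibm research update: superconductivity results attached",
       "lunch on thursday?",
   ]

   for msg in messages:
       if keywords <= set(msg.lower().split()):   # all keywords present
           print("flagged for review:", msg)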

   Note that solutions for coping with information-age
vulnerabilities may well create new responsibilities for
businesses. For example, businesses may have to ensure that
the security measures they take are appropriate for the
information they are protecting, and/or that the information
they are protecting remains available for authorized use.
Failure to discharge these responsibilities properly may
result in a set of liabilities that these businesses currently
do not face.

   Appendix I of this report elaborates issues of information
vulnerability in the context of key industries such as banking
and financial services, health care, manufacturing, the
petroleum industry, pharmaceuticals, the entertainment
industry, and government.

----------

   (3)  National Counterintelligence Center, *Annual Report to
Congress on Foreign Economic Collection and Industrial
Espionage*, Washington, D.C., July 1995, p. 15.

   (4)  From the National Counterintelligence Center, *Annual
Report to Congress on Foreign Economic Collection and
Industrial Espionage*, Washington, D.C., July 1995. Further,
intelligence collection by foreign powers is facilitated
when a hostile government interested in eavesdropping controls
the physical environment in which a U.S. company may be
operating. For example, the U.S. company may be in a nation in
which the telecommunications system is under the direct
control of the government. When a potentially hostile
government controls the territory on which a company must
operate, many more compromises are possible.

   (5)  As a rough rule of thumb, Martin Hellman estimates
that 10 billion (10^10) words can be searched for $1. This
estimate is based on an experiment in which Hellman used the
Unix utility program "fgrep" to search a 1 million (10^6)
character file for a specific string of 10 characters known to
be at the end of the file and nowhere else. It took the NeXT
workstation on which this experiment was run approximately 1
second to find these last 10 characters. Since there are
approximately 10^5 seconds in a day and 10^3 days (about 3
years) in the useful life of the workstation, it can search
roughly 10^13 words over its life. Since such a workstation
is worth
on the order of $1,000 today, this works out to 10^10 words
searched for $1. (With the use of specialized hardware, this
cost could be reduced significantly. For example, in the 1976
Book IV of the Senate Select Committee on Intelligence Report,
R.L. Garwin describes the use of "match registers" to
efficiently implement queries against a database.)
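
   The arithmetic behind this rule of thumb can be reproduced
directly; the sketch below (Python) simply restates the round
numbers used in the estimate above.

   # Words searchable per dollar of workstation cost, using the round
   # numbers of the estimate above (all figures are order-of-magnitude).
   chars_per_second = 1e6                     # one second per 10^6-character file
   words_per_second = chars_per_second / 10   # roughly 10 characters per word
   seconds_of_useful_life = 1e5 * 1e3         # ~10^5 s/day for ~10^3 days
   workstation_cost_dollars = 1_000

   words_per_lifetime = words_per_second * seconds_of_useful_life    # ~10^13
   words_per_dollar = words_per_lifetime / workstation_cost_dollars  # ~10^10
   print(f"about {words_per_dollar:.0e} words searched per dollar")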

   (6)  Other noncomputer-based technology for the clandestine
gathering of information is widely available on the retail
market. In recent years, concern over the ready availability
of such equipment has grown. See, for example, Ross E. Milloy,
"Spying Toys for Adults or Supplies for Crimes?," *New York
Times*, August 28, 1995, p. A-10; Pam Belluck, "A Shadow over
the Spy-Shop Business," *New York Times*, September 22, 1995,
p. B-3; and James C. McKinley, Jr., "U.S. Agents Raid Stores
in 24 Cities to Seize Spy Gear," *New York Times*, April 6,
1995, p. A-1.

____________________________________________________________


             1.4.2 Ensuring the Nation's Ability
                  to Exploit Global Markets

   With the increasing globalization of business operations,
information technology plays a key role in maintaining the
competitive strengths of U.S. business. In particular, U.S.
businesses have proven adept at exploiting information and
information technologies to create new market niches and
expand old ones. This pattern has deep roots. For example,
beginning in the 1960s, American Airlines pioneered in
computerized reservations systems and extended use of the
information captured and stored in such systems, generating an
entire new business that is more profitable than air transport
services. More recently, creative uses of information
technology have advanced U.S. leadership in the production of
entertainment products (e.g., movies and videos, recorded
music, on-line services) for the world.

   U.S. innovation in using information technology reflects in
part the economic vitality that makes new technology
affordable. It also reflects proximity to the research and
production communities that supply key information technology
products, communities with which a variety of U.S. industries
have successfully exchanged talent, communicated their needs
as customers, and collaborated in the innovation process. In
other words, it is not an accident that innovation in both use
and production of information technology has blossomed in the
United States.

   The business advantages enjoyed by U.S. companies that use
information technology are one important reason that the
health of U.S. computer, telecommunications, and information
industries is important to the economy as a whole. A second
important reason is the simple fact that the U.S. information
technology sector (the set of industries that supply
information technology goods and services) is the world's
strongest.(7) The industry has an impressive record of product
innovation; key U.S. products are de facto world standards;
U.S. marketing and distribution capabilities for software
products are unparalleled; and U.S. companies have
considerable strengths in the manufacture of specialized
semiconductor technologies and other key components. A strong
information technology sector makes a significant contribution
to the U.S. balance of payments and is responsible for large
numbers of high-paying jobs. These strengths establish a firm
foundation for continued growth in sales for U.S. information
technology products and services as countries worldwide
assimilate these technologies into their economies.

   Finally, because of its technological leadership the United
States should be better positioned to extend that lead, even
if the specific benefits that may result are not known in
advance. The head start in learning how to use information
technology provides a high baseline on which U.S. individuals
and organizations can build.

   The committee believes that information technology is one
of a few high-technology areas (others might include aerospace
and electronics) that play a special role in the economic
health of the nation, and that leadership in this area is one
important factor underlying U.S. economic strength in the
world today.(8) To the extent that this belief is valid, the
economic dimension of national security and perhaps even
traditional national security itself may well depend
critically on a few key industries that are significant to
military capabilities, the industrial base, and the overall
economic health of the nation. Policy that acts against the
health and global viability of these industries or that
damages the ability of the private sector to exploit new
markets and identify niches globally thus deserves the most
careful scrutiny.

   Because it is inevitable that other countries will expand
their installed information technology bases and develop their
own innovations and entrepreneurial strengths, U.S. leadership
is not automatic. Already, evidence of such development is
available, as these nations build on the falling costs of
underlying technologies (e.g., microprocessors, aggregate
communications bandwidth) and worldwide growth in relevant
skills. The past three decades of information technology
history provide enough examples of both successful first
movers and strategic missteps to suggest that U.S. leadership
can be either reinforced or undercut: leadership is an asset,
and it is sensitive to both public policy and private action.

   Public and private factors affecting the competitive health
of U.S. information technology producers are most tightly
coupled in the arena of foreign trade.(9) U.S. producers place
high priority on ease of access to foreign markets. That
access reflects policies imposed by U.S. and foreign
governments, including governmental controls on what can be
exported to whom. Export controls affect foreign trade in a
variety of hardware, software, and communications systems.(10)
They are the subject of chronic complaints from industry, to
which government officials often respond by pointing to other,
industry-centered explanations (e.g., deficiencies in product
design or merchandising) for observed levels of foreign sales
and market shares. Chapter 4 addresses export controls in the
context of cryptography and national cryptography policy.

----------

   (7)  For example, a staff study by the U.S. International
Trade Commission found that 8 of the world's top 10
applications software vendors, 7 of the world's top 10
systems software vendors, the top 5 systems integration firms,
and 8 of the top 10 custom programming firms are U.S. firms;
the top 9 global outsourcing firms have headquarters in the
U.S. See Office of Industries, U.S. International Trade
Commission, *Global Competitiveness of the U.S. Computer
Software and Service Industries*, Staff Research Study #21,
Washington, D.C., June 1995, Chapter 5.

   (8)  The committee acknowledges that there is a wide range
of judgment among responsible economists on this matter. Some
argue that the economy is so diverse that the fate of a single
industry or even a small set of industries has a relatively
small effect on broader economic trends. Others argue that
certain industries are important enough to warrant subsidy or
industrial policy to promote their interests. The committee
discussed this specific issue to a considerable extent and
found a middle ground between these two extremes -- that
information technology is one important industry among others,
and that the health and well-being of that industry are
important to the nation. This position is also supported by
the U.S. government, which notes that telecommunications and
computer hardware/software are among a number of industries
that are of "strategic interest to the United States ...
because they produce classified products for the government,
produce dual use technology used in both the public and
private sectors, and are responsible for leading-edge
technologies critical to maintaining U.S. economic security."
National Counterintelligence Center, *Annual Report to
Congress on Foreign Economic Collection and Industrial
Espionage*, Washington, D.C., July 1995, p. 15.

   (9)  Of course, many intrafirm and intraindustry factors
shape competitive strength, such as good management, adequate
financing, good fit between products and consumer preferences,
and so on.

   (10) See, for example, John Harvey et al., *A Common-Sense
Approach to High-Technology Export Controls*, Center for
International Security and Arms Control, Stanford University,
Stanford, California, March 1995; National Research Council,
*Finding Common Ground: U.S. Export Controls in a Changed
Global Environment*, National Academy Press, Washington, D.C.,
1991; Computer Science and Telecommunications Board, National
Research Council, *Global Trends in Computer Technology and
Their Impact on Export Control*, National Academy Press,
Washington, D.C., 1988.

____________________________________________________________


      1.5 INDIVIDUAL AND PERSONAL INTERESTS IN PRIVACY

   The emergence of the information age affects individuals as
well as businesses and other organizations. As numerous
reports argue, the nation's information infrastructure
promises many opportunities for self-education, social
exchange, recreation, personal business, cost-effective
delivery of social programs, and entrepreneurship.(11) Yet the
same technologies that enable such benefits may also convey
unwanted side effects. Some of those can be considered
automated versions of problems seen in the paper world; others
are either larger in scale or different in kind. For
individuals, the area relevant to this report is privacy and
the protection of personal information. Increasing reliance on
electronic commerce and the use of networked communication for
all manner of activities suggest that more information about
more people will be stored in network-accessible systems and
will be communicated more broadly and more often, thus raising
questions about the security of that information.

   Privacy is generally regarded as an important American
value, a right whose assertion has not been limited to those
"with something to hide." Indeed, assertion of the right to
privacy as a matter of principle (rather than as an
instrumental action) has figured prominently in U.S. political
and social history; it is not merely abstract or theoretical.

   In the context of an information age, an individual's
privacy can be affected on two levels: privacy in the context
of personal transactions (with businesses or other
institutions and with other individuals), and privacy
vis-a-vis governmental units. Both levels are affected by the
availability of tools, such as cryptography in the context of
information and communications systems, that can help to
preserve privacy. Today's information security technology, for
example, makes it possible to maintain or even raise the cost
of collecting information about individuals. It also provides
more mechanisms for government to help protect that
information. The Clinton Administration has recognized
concerns about the need to guard individual privacy,
incorporating them into the security and privacy guidelines of
its Information Infrastructure Task Force.(12) These
guidelines represent an important step in the process of
protecting individual privacy.

----------

   (11) See, for example, Computer Science and
Telecommunications Board (CSTB), National Research Council,
*The Unpredictable Certainty: Information Infrastructure
Through 2000*, National Academy Press, Washington, D.C., 1996;
and CSTB, *The Unpredictable Certainty: Companion Volume of
White Papers*, 1996; CSTB, *The Changing Nature of
Telecommunications/Information Infrastructure*, National
Academy Press, Washington, D.C., 1995.

   (12) Information Infrastructure Task Force, National
Information Infrastructure Security Issues Forum, *NII
Security: The Federal Role*, Washington, D.C., June 5, 1995.

____________________________________________________________


           1.5.1 Privacy in an Information Economy

   Today, the prospect for easier and more widespread
collection and use of personal data as a byproduct of ordinary
activities raises questions about inappropriate activities by
industry, nosy individuals, and/or criminal elements in
society. Criminals may obtain sensitive financial information
to defraud individuals (credit card fraud, for example,
amounts to approximately $20 per card per year). Insurance
companies may use health data collected on individuals to
decide whether to provide or deny health insurance -- putting
concerns about business profitability in possible conflict
with individual and public health needs. On the other hand,
much of the personal data in circulation is willingly divulged
by individuals for specific purposes; the difficulty is that
once shared, such information is available for additional
uses. Controlling the further dissemination of personal data
is a function both of procedures for how information should be
used and of technology (including but not limited to
cryptography) and procedures for restricting access to those
authorized.

   Given such considerations, individuals in an information
age may wish to be able to:

   +    Keep specific information private. Disclosure of
information of a personal nature that could be embarrassing if
known, whether or not such disclosure is legal, is regarded as
an invasion of privacy by many people. A letter to Ann Landers
from a reader described his inadvertent eavesdropping on some
very sensitive financial transactions being conducted on a
cordless telephone.(13) A staff member of this study committee
has heard broadcasts of conversations that apparently emanate
from a next-door baby monitor whose existence has been
forgotten. Home banking services using telephone lines or
network connections and personal computers will result in the
flow on public networks of large amounts of personal
information regarding finances. Even the ad copy in some of
today's consumer catalogues contains references to information
security threats.(14)

   +    Ensure that a party with whom they are transacting
business is indeed the party he or she claims to be. Likewise,
they may seek to authenticate their own identity with
confidence that such authentication will be accepted by other
parties, and that anyone lacking such authentication will be
denied the ability to impersonate them.(15) Such a capability
is needed to transfer money among mutual funds with a
telephone call or to minimize unauthorized use of credit card
accounts.(16) In an electronic domain without face-to-face
communications or recognizable indicators such as voices and
speech patterns (as used today in telephone calls), forgery of
identity becomes increasingly easy.
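(A brief illustrative sketch at the end of this subsection
shows how digital signatures can support this capability and
help prevent the false repudiation described in the next
item.)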

   +    Prevent the false repudiation of agreed-to
transactions. It is undesirable for a party to a transaction
to be able to repudiate (deny) his agreement to the terms of
the transaction. For example, an individual may agree to pay
a certain price for a given product; he or she should not then
be able to deny having made that agreement (as he or she might
be tempted to do upon finding a lower price elsewhere).

   +    Communicate anonymously (i.e., carry out the opposite
of authenticated communication). Individuals may wish to
communicate anonymously to criticize the government or a
supervisor, report illegal or unethical activity without
becoming further involved, or obtain assistance for a problem
that carries a social stigma. In other instances, they may
simply wish to speak freely without fear of social reprisal or
for the entertainment value of assuming a new digital identity
in cyberspace.

   +    Ensure the accuracy of data that is relevant to them.
Many institutions, such as banks, other financial
institutions, and hospitals, keep records on individuals.
These individuals often
have no personal control of these records, even though the
integrity of the data in these records can be of crucial
significance. Occasional publicity attests to instances of the
inaccuracy of such data (e.g., credit records) and to the
consequences for individuals.

   Practical safeguards for privacy such as those outlined
above may be more compelling than abstract or principled
protection of a right to privacy.
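
   To make concrete the authentication and false-repudiation
points raised in the list above, the following minimal sketch
(added for illustration and not drawn from this report) uses
the third-party Python "cryptography" package to sign a
hypothetical order with a private key and verify it with the
corresponding public key. The choice of algorithm (Ed25519)
and the message shown are illustrative assumptions only:

      # Minimal digital-signature sketch: the signer holds a private
      # key; anyone holding the matching public key can check that a
      # message came from that signer and was not altered afterward.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives.asymmetric import ed25519

      private_key = ed25519.Ed25519PrivateKey.generate()
      public_key = private_key.public_key()

      order = b"Transfer $500 from fund A to fund B"  # hypothetical order
      signature = private_key.sign(order)

      # Verification succeeds only for the exact message that was signed.
      try:
          public_key.verify(signature, order)
          print("signature valid: sender authenticated, order binding")
      except InvalidSignature:
          print("signature invalid")

      # An altered order is rejected, so a signer cannot plausibly
      # deny the terms actually signed (protection against false
      # repudiation), and an impostor cannot forge a valid order.
      try:
          public_key.verify(signature, b"Transfer $900 from fund A to fund B")
      except InvalidSignature:
          print("altered order rejected")

   In practice, of course, establishing that a given public key
really belongs to a particular person or institution requires
mechanisms beyond this sketch; cryptographic tools of this kind
are described in Chapter 2.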

----------

   (13) Ann Landers, "Ann Landers," *Washington Post*,
Creators Syndicate, October 20, 1995, p. D-5.

   (14) For example, a catalogue from Comtrad Industries notes
that "burglars use 'Code Grabbers' to open electric garage
doors and break into homes," defining "code grabbers" as
"devices that can record and play back the signal produced
from your garage door remote control." Comtrad Industries
catalogue, 1995, p. 20. The Herrington catalogue advertises
the "Enigma" phone scrambler by noting that "[a] recent Wall
Street Journal article documents the increasing acceptance and
prevalence of industrial espionage" and mentions as an
"example of the alarming intrusion of the federal government
into citizens' private lives" the fact that "the FBI
petitioned Congress to further expand its wiretapping
authority." Herrington, Winter 1996, p. 13. Note that both of
these mail-order firms cater to mainstream consumer sentiment.

   (15) For example, a journalist who had reported on the
trafficking of illegally copied software on America Online was
the victim of hackers who assumed his on-line identity,
thereby intercepting his e-mail messages and otherwise
impersonating him. See Peter Lewis, "Security Is Lost in
Cyberspace," *New York Times*, February 22, 1995, p. D-1.
Other cases of "stolen identities" have been reported in the
press, and while these cases remain relatively isolated, they
are still a matter of public concern. Thieves forge signatures
and impersonate identities of law-abiding citizens to steal
money from bank accounts and to obtain credit cards in the
name of those citizens; see Charles Hall, "A Personal Approach
to Stealing," *Washington Post*, April 1, 1996, p. A-1.

   (16) For example, a recent press article calls attention to
security concerns raised by the ease of access to 401(k)
retirement accounts (for which there is no cap on the
liability incurred if a third party with unauthorized access
to an account transfers funds improperly). See Timothy Middleton,
"Will Thieves Crack Your Automated Nest Egg?," *New York
Times*, March 10, 1996, Business Section, p. 10. Another
article describes a half-dozen easy-to-apply methods that can
be used by criminals to undertake fraud. See Albert Crenshaw,
"Creative Credit Card Crooks Draw High-Tech Response,"
*Washington Post*, August 6, 1995, Business Section, p. H-1.

____________________________________________________________


                 1.5.2 Privacy for Citizens

   Public protection of privacy has been less active in the
United States than in other countries, but the topic is
receiving increasing attention. In particular, it has become
an issue in the political agenda of people and organizations
that have a wide range of concerns about the role and
performance of government at all levels; it is an issue that
attracts advocates from across the spectrum of political
opinion. The politicization of privacy may inhibit the orderly
consideration of relevant policy, including cryptography
policy, because it revolves around the highly emotional issue
of trust in government. The trust issue surfaced in the
initial criticisms of the Clipper chip initiative in
1993 (Chapter 5) and continues to color discussion of privacy
policy generally and cryptography policy specifically.

   To many people, freedom of expression and association,
protection against undue governmental, commercial, or public
intrusion into their personal affairs, and fair treatment by
various authorities are concerns shaped by memories of highly
publicized incidents in which such rights were flouted.(17) It
can be argued that such incidents were detectable and
correctable precisely because they involved government units
that were obligated to be publicly accountable -- and indeed,
these incidents prompted new policies and procedures as well
as greater public vigilance. It is also easy to dismiss them
as isolated instances in a social system that for the most
part works well. But where these episodes involve government,
many of those skeptical about government believe that they
demonstrate a capacity of government to violate civil
liberties of Americans who are exercising their constitutional
rights.(18) This perception is compounded by attempts to
justify past incidents as having been required for purposes of
national security. Such an approach both limits public
scrutiny and vitiates policy-based protection of personal
privacy.

   It is hard to determine with any kind of certainty the
prevalence of the sentiments described in this section. By
some measures, over half of the public is skeptical about
government in general,(19) but whether that skepticism
translates into widespread public concern about government
surveillance is unclear. The committee believes that most
people acting as private individuals feel that their
electronic communications are secure and do not generally
consider it necessary to take special precautions against
threats to the confidentiality of those communications. These
attitudes reflect the fact that most people, including many
who are highly knowledgeable about the risks, do not give much
conscious thought to these issues in their day-to-day
activities.

   At the same time, the committee acknowledges the concerns
of many law-abiding individuals about government surveillance.
It believes that such concerns and the questions they raise
about individual rights and government responsibilities must
be taken seriously. It would be inappropriate to dismiss such
individuals as paranoid or overly suspicious. Moreover, even
if only a minority is worried about government surveillance,
it is an important consideration, given the nation's history
as a democracy,(20) for determining whether and how access to
and use of cryptography may be considered a citizen's right
(Chapter 7).

----------

   (17) Some incidents that are often cited include the
surveillance of political dissidents, such as Martin Luther
King, Jr., Malcolm X, and the Student Non-Violent Coordinating
Committee in the mid to late 1960s; the activities of the
Nixon "plumbers" in the late 1960s, including the harassment
and surveillance of sitting and former government officials and
journalists and their associates in the name of preventing
leaks of sensitive national security information; U.S.
intelligence surveillance of the international cable and
telephone communications of U.S. citizens from the early 1940s
through the early 1970s in support of FBI and other domestic
law enforcement agencies; and the creation of FBI dossiers on
opponents of the Vietnam War in the mid-1960s. The description
of these events is taken largely from Frank J. Donner, *The
Age of Surveillance*, Alfred A. Knopf, Inc., New York, 1980
(surveillance of political dissidents, pp. 244-248; plumbers,
pp. 248-252; FBI dossiers on antiwar protesters, pp. 252-256;
NSA surveillance, pp. 276-277.) Donner's book documents many
of these events. See also *Final Report of the Senate Select
Committee to Study Governmental Operations with respect to
Intelligence Activities*, Book II, April 26, 1976, U.S.
Government Printing Office, Washington, D.C., p. 12.

   (18) For example, at the 4th Conference on Computers,
Freedom, and Privacy in Chicago, Illinois, held in 1994, a
government speaker asked the audience if they were more
concerned about government abuse and harassment or about
criminal activity that might be directed at them. An
overwhelming majority of the audience indicated greater
concern about the first possibility. For recent accounts that
give the flavor of concerns about malfeasance by law
enforcement officials, see Ronald Smothers, "Atlanta Holds Six
Policemen In Crackdown," *New York Times*, September 7, 1995,
p. 9; George James, "Police Officer Is Arrested on Burglary
Charges in Sting Operation," *New York Times*, September 7,
1995, p. B-5; Kenneth B. Noble, "Many Complain of Bias in Los
Angeles Police," *New York Times*, September 4, 1995, p. 11;
Kevin Sack, "Racism of a Rogue Officer Casts Suspicion on
Police Nationwide," *New York Times*, September 4, 1995, p. 1;
Gordon Witkin, "When the Bad Guys are Cops," *U.S. News &
World Report*, September 11, 1995, p. 20; Barry Tarlow, "Doing
the Fuhrman Shuffle," *Washington Post*, August 27, 1995, p.
C-2; David W. Dunlap, "F.B.I. Kept Watch on AIDS Group During
Protest Years," *New York Times*, May 16, 1995, p. B3.

   (19) For example, a national Harris poll in January 1994
asked "Which type of invasions of privacy worry you the most
in America today -- activities of government agencies or
businesses?" Fifty-two percent said that government agencies
were their greater worry, while 40% selected business. See
Center for Social and Legal Research, *Privacy & American
Business*, Volume 1(3), Hackensack, New Jersey, 1994, p. 7.

   (20) Protecting communications from government surveillance
is a time-honored technique for defending against tyranny. A
most poignant example is the U.S. insistence in 1945 that the
postwar Japanese constitution include protection against
government surveillance of the communications of Japanese
citizens. In the aftermath of the Japanese surrender in World
War II, the United States drafted a constitution for Japan.
The initial U.S. draft contained a provision saying that "[n]o
censorship shall be maintained, nor shall the secrecy of any
means of communication be violated." The Japanese response to
this provision was a revised provision stating that "[t]he
secrecy of letter and other means of communication is
guaranteed to all of the people, provided that necessary
measures to be taken for the maintenance of public peace and
order, shall be provided by law." General Douglas MacArthur,
who was supervising the drafting of the new Japanese
constitution, insisted that the original provision regarding
communications secrecy and most other provisions of the
original U.S. draft be maintained. The Japanese agreed, this
time requesting only minor changes in the U.S. draft, and
accepting fully the original U.S. provision on communications
secrecy. See Osamu Nishi, *Ten Days Inside General
Headquarters (GHQ): How the Original Draft of the Japanese
Constitution Was Written in 1946*, Seibundo Publishing Co.
Ltd., Tokyo, 1989.

____________________________________________________________


               1.6 SPECIAL NEEDS OF GOVERNMENT

   Government encompasses many functions that generate or
depend on information, and current efforts to reduce the scope
and size of government depend heavily on information
technology. In many areas of government, the information and
information security needs resemble those of industry (see
Appendix I). Government also has important responsibilities
beyond those of industry, including those related to public
safety. For two of the most important of these
responsibilities, law enforcement and national security, which
are also the least well understood in detail, the need for
strong information security has long been recognized.

   Domestic law enforcement authorities in our society have
two fundamental responsibilities: preventing crime and
prosecuting individuals that have committed crimes. Crimes
committed and prosecuted are more visible to the public than
crimes prevented (see Chapter 3).

   The following areas relevant to law enforcement require
high levels of information security:

   +    *Prevention of information theft from businesses and
individuals*, consistent with the transformation of economic
and social activities outlined above.

   +    *Tactical law enforcement communications*. Law
enforcement officials working in the field need secure
communications. At present, police scanners available at
retail electronics stores can monitor wireless communications
channels used by police; criminals eavesdropping on such
communications can receive advance warning of police
responding to crimes that they may be committing.

   +    *Efficient use by law enforcement officials of the
large amounts of information compiled on criminal activity*.
Getting the most use from such information implies that it be
remotely accessible and not be improperly modified (assuming
its accuracy and proper context, a requirement that in itself
leads to much controversy(21)).

   +    *Reliable authentication of law enforcement
officials*. Criminals have been known to impersonate law
enforcement officials for nefarious purposes, and the
information age presents additional opportunities.

   In the domain of national security, traditional missions
involve protection against military threats originating from
other nation-states and directed against the interests of the
United States or its friends and allies. These traditional
missions require strong protection for vital information.

   +    U.S. military forces require secure communications.
Without cryptography and other information security
technologies in the hands of friendly forces, hostile forces
can monitor the operational plans of friendly forces to gain
an advantage.(22)

   +    Force planners must organize and coordinate flows of
supplies, personnel, and equipment. Such logistical
coordination involves databases whose integrity and
confidentiality must be maintained and to which remote access
must be preserved.

   +    Sensitive diplomatic communications between the United
States and its representatives or allies abroad, and/or
between critical elements of the U.S. government, must be
protected as part of the successful conduct of foreign
affairs, even in peacetime.(23)

   In addition, the traditional missions of national security
have expanded in recent years to include protection against
terrorists (24) and international criminals, especially drug
cartels.(25) Furthermore, recognition has been growing that in
an information age, economic security is part of national
security.

   More broadly, there is a practical convergence under way
among protection of individual liberties, public safety,
economic activity, and military security. For example, the
nation is beginning to realize that critical elements of the
U.S. civilian infrastructure -- including the banking system,
the air traffic control system, and the electric power grid --
must be protected against the threats described above, as must
the civilian information infrastructure that supports the
conduct of sensitive government communications. Because
civilian infrastructure provides a significant degree of
functionality on which the military and defense sector
depends, traditional national security interests are at stake
as well, and concerns have grown about the implications of
what has come to be known as information warfare (Box 1.9).
More generally, the need for more secure systems, updated
security policies, and effective procedural controls is taking
on truly nationwide dimensions.

----------

   (21) See, for example, U.S. General Accounting Office,
*National Crime Information Center: Legislation Needed to
Deter Misuse of Criminal Justice Information*,
GAO/T-GGD-93-41, 1993.

   (22) For example, the compromise of the BLACK code used by
Allied military forces in World War II enabled German forces
in Africa in 1942, led by General Erwin Rommel, to determine
the British order of battle (quantities, types, and locations
of forces), estimate British supply and morale problems, and
know the tactical plans of the British. In one instance, the
compromise of one particular message enabled Rommel to thwart
a critical British counterattack. In July of that year, the
British switched to a new code, thus denying Rommel an
important source of strategic intelligence. Rommel was thus
surprised at the Battle of Alamein, widely regarded as a
turning point in the conflict in the African theater. See
David Kahn, *The Codebreakers: The Story of Secret Writing*,
MacMillan, New York, 1967, pp. 472-477.

   (23) An agreement on Palestinian self-rule was reached in
September 1995. According to public reports, the parties
involved, Yasir Arafat (leader of the Palestine Liberation
Organization) and Shimon Peres (then Foreign Minister of
Israel), depended heavily on the telephone efforts of Dennis
Ross, a U.S. negotiator, in mediating the negotiations that
led to the agreement. Obviously, in such circumstances, the
security of these telephone discussions was critical. See Steven
Greenhouse, "Twist to Shuttle Diplomacy: U.S. Aide Mediated by
Phone," *New York Times*, September 25, 1995, p. 1.

   (24) Terrorist threats generally emanate from
nongovernmental groups, though at times involving the tacit or
implicit (but publicly denied) support of sponsoring national
governments. Furthermore, the United States is regarded by
many parties as a particularly important target for political
reasons by virtue of its prominence in world affairs. Thus,
terrorists in confrontation with a U.S. ally may wish to make
a statement by attacking the United States directly rather
than its ally.

   (25) See, for example, Phil Williams, "Transnational
Criminal Organizations and International Security,"
*Survival*, Volume 36(1), Spring 1994, pp. 96-113.

____________________________________________________________


                          1.7 RECAP

   Chapter 1 underscores the need for attention to protecting
vital U.S. interests and values in an information age
characterized by a number of trends:

   +    The world economy is in the midst of a transition from
an industrial to an information age in which information
products are extensively bought and sold, information assets
provide leverage in undertaking business activities, and
communications assume ever-greater significance in the lives of
ordinary citizens. At the same time, national economies are
increasingly interlinked across national borders, with the
result that international dimensions of public policy are
important.

   +    Trends in information technology suggest an
ever-increasing panoply of technologies and technology-enabled
services characterized by high degrees of heterogeneity,
enormous computing power, and large data storage and
transmission capabilities.

   +    Given the transition to a global information society
and trends in information technology, the future of
individuals and businesses alike is likely to be one in which
information of all types plays a central role. Electronic
commerce in particular is likely to become a fundamental
underpinning of the information future.

   +    Government has special needs for information security
that arise from its role in society, including the protection
of classified information and its responsibility for ensuring
the integrity of information assets on which the entire nation
depends.

   Collectively, these trends suggest that future needs for
information security will be large. Threats to information
security will emerge from a variety of different sources, and
they will affect the confidentiality and integrity of data and
the reliable authentication of users; these threats do and
will affect businesses, government, and private individuals.

   Chapter 2 describes how cryptography may help to address
all of these problems.

____________________________________________________________

        BOX 1.1 Communications and Computing Devices
                  and the Role of Software

   Communications and computing devices can be dedicated to a
single purpose or may serve multiple purposes. Dedicated
single-purpose devices are usually (though not always)
hardware devices whose functionality cannot be easily altered.
Examples include unprogrammable pocket calculators,
traditional telephones, walkie-talkies, pagers, fax machines,
and ordinary telephone answering machines.

   A multipurpose device is one whose functionality can be
altered by the end user. In some instances, a hardware device
may be "reprogrammed" to perform different functions simply by
the physical replacement of a single chip by another chip or
by the addition of a new circuit board. Open bus architectures
and standard hardware interfaces such as the PC Card are
intended to facilitate multipurpose functionality.

   Despite such interfaces and architectures for hardware,
software is the primary means for implementing multipurpose
functionality in a hardware device. With software, physical
replacement of a hardware component is unnecessary -- a new
software program is simply loaded and executed. Examples
include personal computers (which do word processing or
mathematical calculations, depending on what software the user
chooses to run), programmable calculators (which solve
different problems, depending on the programming given to
them), and even many modern telephones (which can be
programmed to execute functions such as speed dialing). In
these instances, the software is the medium in which the
expectations of the user are embedded.

   Today, the lines between hardware and software are
blurring. For example, some "hardware" devices are controlled
by programs stored in semi-permanent read-only memory.
"Read-only memory" (ROM) originally referred to memory for
storing instructions and data that could never be changed, but
this characteristic made ROM-controlled devices less flexible.
Thus, the electronics industry responded with "read-only"
memory whose contents take special effort to change (such as
exposing the memory chip to a burst of ultraviolet light or
sending a particular signal to a particular pin on the
chip). The flexibility and cheapness of today's electronic
devices make them ubiquitous. Most homes now have dozens of
microprocessors in coffee makers, TVs, refrigerators, and
virtually anything that has a control panel.

____________________________________________________________

     BOX 1.2 An Attempted Electronic Theft from Citicorp

   Electronic money transfers are among the most closely
guarded activities in banking. In 1994, an international group
of criminals penetrated Citicorp's computerized electronic
transfer system and moved about $12 million from legitimate
customer accounts to their own accounts in banks around the
world. According to Citicorp, this is the first time its
computerized cash-management system has been breached.
Corporate customers access the system directly to transfer
funds for making investments, paying bills, and extending
loans, among other purposes. The Citicorp system moves about
$500 billion worldwide each day. Authority to access the
system is verified with a cryptographic code that only the
customer knows.

   The case began in June 1994, when Vladimir Levin of St.
Petersburg, Russia, allegedly accessed Citicorp computers in
New York through the international telephone network, posing
as one of Citicorp's customers. He moved some customer funds
to a bank account in Finland, where an accomplice withdrew the
money in person. In the next few months, Levin moved various
Citicorp customers' funds to accomplices' personal or business
accounts in banks in St. Petersburg, San Francisco, Tel Aviv,
Rotterdam, and Switzerland.

   Accomplices had withdrawn a total of about $400,000 by
August 1994. By that time, bank officials and their customers
were on alert. Citicorp detected subsequent transfers quickly
enough to warn the banks into which funds were moved to freeze
the destination accounts. (Bank officials noted they could
have blocked some of these transfers, but they permitted and
covertly monitored them as part of the effort to identify the
perpetrators.) Other perpetrators were arrested in Tel Aviv
and Rotterdam; they revealed that they were working with
someone in St. Petersburg. An examination of telephone-company
records in St. Petersburg showed that Citicorp computers had
been accessed through a telephone line at AO Saturn, a
software company. A person arrested after attempting to make
a withdrawal from a frozen account in San Francisco
subsequently identified Levin, who was an AO Saturn employee.
Russia has no extradition treaty with the United States;
however, Levin traveled to Britain in March 1995 and was
arrested there. As of September 1995, proceedings to extradite
him for trial in the United States were in progress.

   Levin allegedly penetrated Citicorp computers using
customers' user identifications and passwords. In each case,
Levin electronically impersonated a legitimate customer, such
as a bank or an investment capital firm. Some investigators
suspect that an accomplice inside Citicorp provided Levin with
necessary information; otherwise, it is unclear how he could
have succeeded in accessing customer accounts. He is believed
to have penetrated Citicorp's computers 40 times in all.
Citicorp says it has upgraded its system's security to prevent
future break-ins.

----------

SOURCES: William Carley and Timothy O'Brien, "Cyber Caper: How
Citicorp System Was Raided and Funds Moved Around World,"
*Wall Street Journal*, September 12, 1995, p. A-1; Saul
Hansell, "A $10 Million Lesson in the Risks of Electronic
Banking," *New York Times*, August 19, 1995, p. 31.

____________________________________________________________

 BOX 1.3 Vulnerabilities in Information Systems and Networks

   Information systems and networks can be subject to four
generic vulnerabilities:

   1.   Eavesdropping or data browsing. By surreptitiously
obtaining the confidential data of a company or by browsing a
sensitive file stored on a computer to which one has obtained
improper access, an adversary could be in a position to
undercut a company bid, learn company trade secrets (e.g.,
knowledge developed through proprietary company research) that
would eliminate a competitive advantage of the company, or
obtain the company's client list in order to steal customers.
Moreover, stealth is not always necessary for damage to occur
-- many companies would be damaged if their sensitive data
were disclosed, even if they knew that such a disclosure had
occurred.

   2.   Clandestine alteration of data. By altering a
company's data clandestinely, an adversary could destroy the
confidence of the company's customers in the company, disrupt
internal operations of the company, or subject the company to
shareholder litigation.
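(A brief sketch following this list illustrates how a keyed
cryptographic checksum can make such alteration detectable.)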

   3.   Spoofing. By illicitly posing as a company, an
adversary could place false orders for services, make
unauthorized commitments to customers, defraud clients, and
cause no end of public relations difficulties for the company.
Similarly, an adversary might pose as a legitimate customer,
and a company -- with an interest in being responsive to
users' preferences for anonymity under a variety of
circumstances -- could then find itself handicapped in seeking
proper confirmation of the customer's identity.

   4.   Denial of service. By denying access to electronic
services, an adversary could shut down company operations,
especially time-critical ones. On a national scale, critical
infrastructures controlled by electronic networks (e.g., the
air traffic control system, the electrical power grid)
involving many systems linked to each other are particularly
sensitive.
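
   As a concrete illustration of the second vulnerability
above, the following minimal sketch (added for illustration
and not drawn from this report) uses only the Python standard
library to attach a keyed message authentication code (HMAC)
to a record; the key and record shown are hypothetical. A
party that does not know the secret key cannot alter the
record and still produce a matching code:

      # Minimal integrity-check sketch using a keyed hash (HMAC).
      import hashlib
      import hmac

      secret_key = b"key-known-only-to-the-company"  # hypothetical key
      record = b"customer=Acme; balance=10000; status=good"

      # The record is stored or transmitted together with its tag.
      tag = hmac.new(secret_key, record, hashlib.sha256).hexdigest()

      def unaltered(data, tag, key=secret_key):
          """Return True only if the data still matches its tag."""
          expected = hmac.new(key, data, hashlib.sha256).hexdigest()
          return hmac.compare_digest(expected, tag)

      print(unaltered(record, tag))       # True: record is intact
      tampered = b"customer=Acme; balance=90000; status=good"
      print(unaltered(tampered, tag))     # False: alteration detected

   Analogous cryptographic tools for the other vulnerabilities
(e.g., encryption against eavesdropping, authentication against
spoofing) are among the techniques described in Chapter 2.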

____________________________________________________________

      BOX 1.4 Electronic Commerce and the Implications
                    for Interconnectivity

   A number of reports have addressed the potential nature and
impact of electronic commerce.(1) Out of such reports, several
common elements can be distilled:

   +    The interconnection of geographically dispersed units
into a "virtual" company.

   +    The linking of customers, vendors, and suppliers
through videoconferencing, electronic data interchange, and
electronic networks.

   +    The creation of temporary or more permanent strategic
alliances for business purposes.

   +    A vastly increased availability of information and
information products on line, both free and for a fee, that is
useful to individuals and organizations.

   +    The electronic transaction of retail business,
beginning with today's toll-free catalog shopping and
extending to electronic network applications that enable
customers to:

        --  apply for bank loans;

        --  order tangible merchandise (e.g., groceries)
            for later physical delivery;

        --  order intangible merchandise (e.g., music,
            movies) for electronic delivery;

        --  obtain information and electronic documents
            (e.g., official documents such as driver's
            licenses and birth certificates).

   +    The creation of a genuinely worldwide marketplace that
matches buyers to sellers largely without intermediaries.

   +    New business opportunities for small entrepreneurs
that could sell low-value products to the large numbers of
potential customers that an electronic marketplace might
reach.

   In general, visions of electronic commerce writ large
attempt to leverage the competitive edge that information
technologies can provide for commercial enterprises.
Originally used exclusively to facilitate internal
communications, information technology is now used by
corporations to connect directly with their suppliers and
business partners.(2) In the future, corporate networks will
extend all the way to customers, enabling improvements in
customer service and more direct channels for customer
feedback. Furthermore, information technologies will
facilitate the formation of ad hoc strategic alliances among
diverse enterprises and even among competitors on a short time
scale, driven by changes in business conditions that demand
prompt action. This entire set of activities is already well
under way.

   In the delivery of services, the more effective use and
transmission of information has had dramatic effects. Today's
air transportation system would not exist without rapid and
reliable information flows regarding air traffic control,
sales, marketing, maintenance, safety, and logistics planning.
Retailers and wholesalers depend on the rapid collection and
analysis of sales data to plan purchasing and marketing
activities, to offer more differentiated services to
customers, and to reduce operational costs. The insurance
industry depends on rapid and reliable information flows to
its sales force and uses such flows to customize policies and
manage risks.
(See Computer Science and Telecommunications Board, National
Research Council, *Information Technology in the Service
Society: A Twenty-First Century Lever*, National Academy
Press, Washington, D.C., 1994.)

----------

   (1)  See, for example, Cross-Industry Working Team,
*Electronic Cash, Tokens, and Payments in the National
Information Infrastructure*, Corporation for National Research
Initiatives, 1895 Preston White Drive, Suite 100, Reston,
Virginia 22091-5434 (Internet: info-xiwt@cnri.reston.va.us;
Tel: 703/620-8990), 1994; Office of Technology Assessment,
*Electronic Enterprises: Looking to the Future*, U.S.
Government Printing Office, Washington, D.C., July 1994.

   (2)  For example, in manufacturing, collaborative
information technologies can help to improve the quality of
designs and reduce the cost and time needed to revise designs;
product designers will be able to create a "virtual" product,
make extensive computer simulations of its behavior without
supplying all of its details, and "show" it to the customer
for rapid feedback. Networks will enable the entire
manufacturing enterprise to be integrated all along the supply
chain, from design shops to truck fleets that deliver the
finished products. (See Computer Science and
Telecommunications Board, National Research Council,
*Information Technology and Manufacturing: A Research Agenda*,
National Academy Press, Washington, D.C., 1995.)

____________________________________________________________

       BOX 1.5 Tensions Between Security and Openness

   Businesses have long been concerned about the tension
between openness and security. An environment that is open to
everyone is not secure, while an environment that is closed to
everyone is highly secure but not useful. A number of trends
in business today tend to exacerbate this conflict. For
example:

   +    Modern competitive strategies emphasize openness to
interactions with potential customers and suppliers. For
example, such strategies would demand that a bank present
itself as willing to do business with anyone, everywhere, and
at any time. However, such strategies also offer potential
adversaries a greater chance of success, because increasing
ease of access often facilitates the penetration of security
measures that may be taken.

   +    Many businesses today emphasize decentralized
management that pushes decision-making authority toward the
customer and away from the corporate hierarchy. Yet security
often has been (and is) approached from a centralized
perspective. (For example, access controls are necessarily
hierarchical (and thus centralized) if they are to be
maintained uniformly.)

   +    Many businesses rely increasingly on highly mobile
individuals. When key employees were tied to one physical
location, it made sense to base security on physical presence,
e.g., to have a user present a photo ID card to an operator at
the central corporate computer center. Today, mobile computing
and communications are common, with not even a physical wire
to ensure that the person claiming to be an authorized user is
accessing a computer from an authorized location or to prevent
passive eavesdropping on unencrypted transmissions with a
radio scanner.

____________________________________________________________

             BOX 1.6 International Dimensions of
                 Business and Commerce Today

   U.S. firms increasingly operate in a global environment,
obtaining goods and services from companies worldwide,
participating in global virtual corporations, and working as
part of international strategic alliances. One key dimension
of increasing globalization has been the dismantling of
barriers to trade and investment. In the past 40 years,
tariffs among developed countries have been reduced by more
than two-thirds. After the Uruguay Round reductions are
phased in, tariffs in these countries will be under 4%, with
43% of current trade free of any customs duties.

   While tariffs of developing countries are at higher levels,
they have recently begun to decline substantially. After the
Uruguay Round, tariffs in these countries will average 12.3%
by agreement and will be even lower as a result of unilateral
reductions. In response to the reductions in trade barriers,
trade has grown rapidly. From 1950 to 1993, U.S. and world
trade grew at an average compound rate of 10% annually.

   Investment has also grown rapidly in recent years,
stimulated by the removal of restrictions and by international
rules that provide assurances to investors against
discriminatory or arbitrary treatment. U.S. foreign direct
investment also has grown at almost 10% annually during the
past 20 years and now totals about half a trillion dollars.
Foreign direct investment in the United States has risen even
faster over the same period -- at almost 19% annually -- and
now also totals almost $500 billion.

   The expansion of international trade and investment has
resulted in a much more integrated and interdependent world
economy. For the United States, this has meant a much greater
dependence on the outside world. More than a quarter of the
U.S. gross domestic product is now accounted for by trade in
goods and services and returns on foreign investment. Over 11
million jobs are now directly or indirectly related to our
merchandise trade.

   Because the U.S. economy is mature, the maintenance of a
satisfactory rate of economic growth requires that the United
States compete vigorously for international markets,
especially in the faster growing regions of the world. Many
sectors of our economy are now highly dependent on export
markets. This is particularly the case for, but is not limited
to, high-technology goods, as indicated in the table below.

   A second international dimension is the enormous growth in
recent years of multinational enterprises. Such firms operate
across national boundaries, frequently in multiple countries.
According to the 1993 World Investment Report of the United
Nations, transnational corporations (TNCs) with varying
degrees of integration account for about a third of the
world's private sector productive assets.

   The number of TNCs has more than tripled in the last 20
years. At the outset of this decade, about 37,000 U.S. firms
had a controlling equity interest in some 170,000 foreign
affiliates. This does not include nonequity relationships,
such as management contracts, subcontracting, franchising or
strategic alliances. There are some 300 TNCs based in the
United States and almost 15,000 foreign affiliates, of which
some 10,000 are nonbank enterprises.

   The strategies employed by TNCs vary among firms. They may
be based on trade in goods and services alone or, more often,
involve more complex patterns of integrated production,
outsourcing, and marketing. One measure of the extent of
integration by U.S. firms is illustrated by the U.S. Census
Bureau, which reported that in 1994, 46% of U.S. imports and
32% of U.S. exports were between related firms. Of U.S.
exports to Canada and Mexico, 44% were between related
parties; for the European Union and Japan, the share was 37%.

   With respect to imports, the shares of related-party
transactions were 75.5% for Japan, 47.2% for the European
Union, 44.6% for Canada, and 69.2% for Mexico. Among the
sectors with the highest levels of related-party trade, with
shares ranging from 50% to 90%, are data processing equipment
(including computers and parts) and telecommunications
equipment.

____________________________________________________________

                                          Exports As
   Area of Export                         a Percentage
                                          of U.S. Output
_____________________________________________________________

   Electronic computing and parts              52

   Semiconductors and related devices          47

   Magnetic and optical recording media
   (includes software products)                40

----------

SOURCE: U.S. Department of Commerce, Commerce News, August 9,
1995.

____________________________________________________________


                   BOX 1.7 Threat Sources

   +    *Foreign national agencies (including intelligence
services)*. Foreign intelligence operations target key U.S.
businesses. For example, two former directors of the French
intelligence service have confirmed publicly that the French
intelligence service collects economic intelligence
information, including classified government information and
information related to or associated with specific companies
of interest.(1) Foreign intelligence agencies may break into
facilities such as the foreign offices of a U.S. company or
the hotel suite of a U.S. executive and copy computer files
from within that facility (e.g., from a laptop computer in a
hotel room, a desktop computer connected to a network in an
office).(2) Having attained such access, they can also insert
malicious code that will enable future information theft.

   +    *Disgruntled or disloyal employees that work "from the
inside."* Such parties may collude with outside agents.
Threats involving insiders are particularly pernicious because
they are trusted with critical information that is not
available to outsiders. Such information is generally
necessary to understand the meaning of various data flows that
may have been intercepted, even when those data flows are
received in the clear.

   +    *Network hackers and electronic vandals* that are
having fun or making political statements through the
destruction of intellectual property without the intent of
theft. Information terrorists may threaten to bring down an
information network unless certain demands are met;
extortionists may threaten to bring down an information
network unless a ransom is paid. Disgruntled customers seeking
revenge on a company also fall into this category.

   +    *Thieves* attempting to steal money or resources from
businesses. Such individuals may be working for themselves or
acting as part of a larger conspiracy (e.g., in association
with organized crime). The spread of electronic commerce
will increase the opportunities for new and different types of
fraud, as illustrated by the large increase in fraud seen as
the result of increased electronic filing with the Internal
Revenue Service. Even worse, customers traditionally regarded
as the first line of defense against fraud (because they check
their statements and alert the merchants or banks involved to
problems) may become adversaries as they seek to deny a
signature on a check or alter the amount of a transaction.

   It is difficult to know the prevalence of such threats,
because many companies do not discuss for the record specific
incidents of information theft. In some cases, they fear
stockholder ire and losses in customer confidence over
security breaches; in others, they are afraid of inspiring
"copy-cat" attacks or revealing security weaknesses. In still
other cases, they simply do not know that they have been the
victim of such theft. Finally, only a patchwork of state laws
applies to the theft of trade secrets and the like (and not
all states have such laws). There is no federal statute that
protects trade secrets or that addresses commercial information
theft, and federal authorities probing the theft of commercial
information must rely on proving violations of other statutes,
such as the wire and mail fraud laws, interstate transport of
stolen property, conspiracy, or computer fraud and abuse laws;
as a result, documentation of what would be a federal offense
if such a law were present is necessarily spotty. For all of
these reasons, what is known on the public record about
economic losses from information theft almost certainly
understates the true extent of the problem.

----------

   (1)  Two former directors of the DGSE (the French
intelligence service) have publicly stated that one of the
DGSE's top priorities was to collect economic intelligence.
During a September 1991 NBC news program, Pierre Marion,
former DGSE Director, revealed that he had initiated an
espionage program against US businesses for the purpose of
keeping France internationally competitive. Marion justified
these actions on the grounds that the United States and
France, although political and military allies, are economic
and technological competitors. During an interview in March
1993, then DGSE Director Charles Silberzahn stated that
political espionage was no longer a real priority for France
but that France was interested in economic intelligence, "a
field which is crucial to the world's evolution." Silberzahn
advised that the French had some success in economic
intelligence but stated that much work remained because of
the growing global economy. Silberzahn advised during a
subsequent interview that theft of classified information, as
well as information about large corporations, was a long-term
French Government policy. These statements were seemingly
corroborated by a DGSE targeting document prepared in late
1989 and leaked anonymously to the US Government and the press
in May 1993. It alleged that French intelligence had targeted
numerous US Government agencies and corporations to collect
economic and industrial information. Industry leaders such as
Boeing, General Dynamics, Hughes Aircraft, Lockheed, McDonnell
Douglas, and Martin Marietta all were on the list. Heading the
US Government listing was the Office of the US Trade
Representative.

   This unclassified paragraph can be found in the secret
version of the report, National Counterintelligence Center,
*Annual Report to Congress on Foreign Economic Collection and
Industrial Espionage*, Washington, D.C., July 1995.

   (2)  According to a report from the National Communications
System, countries that currently have significant intelligence
operations against the United States for national security
and/or economic purposes include Russia, the People's Republic
of China, Cuba, France, Taiwan, South Korea, India, Pakistan,
Israel, Syria, Iran, Iraq, and Libya. "All of the intelligence
organizations listed [above] have the capability to target
telecommunications and information systems for information or
clandestine attacks. The potential for exploitation of such
systems may be significantly larger." See National
Communications System (NCS), *The Electronic Intrusion Threat
to National Security and Emergency Preparedness
Telecommunications. An Awareness Document*,  2nd ed., NCS,
Alexandria, Va., December 5, 1994, pp. 2-20.

____________________________________________________________


        BOX 1.8 Vulnerability of the Public Switched
                 Telecommunications Network

   The nation's single most critical national-level component
of information infrastructure vulnerable to compromise is the
public switched telecommunications network (PSTN). The PSTN
provides information transport services for geographically
dispersed national assets such as the banking system and
financial markets,(1) and the air traffic control system.(2)
Even the traditional military (3) is highly dependent on the
PSTN. Parties connected to the PSTN are therefore vulnerable
to failure of the PSTN itself and to attacks transmitted over
the PSTN.

   The fundamental characteristic of the PSTN from the
standpoint of information vulnerability is that it is a highly
interconnected network of heterogeneously controlled and
operated computer-based switching stations. Network
connectivity implies that an attacker -- which might range
from a foreign government to a teen-aged hacker -- can in
principle connect to any network site (including sites of
critical importance for the entire network) from any other
network site (which may be geographically remote and even
outside the United States).(4) The sites of critical
importance for the PSTN are the switching nodes that channel
the vast majority of telecommunications traffic in the United
States. Access to these critical nodes, and to other switching
facilities, is supposed to be limited to authorized personnel,
but in practice these nodes are often vulnerable to
penetration. Once in place on a critical node, hostile and
unauthorized users are in a position to disrupt the entire
network.

   The systemic vulnerabilities of the PSTN are the result of
many factors. One is the increasing accessibility of network
software to third parties other than the common carriers,
resulting from the Federal Communications Commission
requirement that the PSTN support open, equal access for
third-party providers of enhanced services as well as for the
common carriers; such accessibility offers intruders many
opportunities to capture user information, monitor traffic,
and remotely manipulate the network. A second reason is that
service providers are allowing customers more direct access to
network elements, in order to offer customer-definable
services such as call forwarding. A third reason is that
advanced services made possible by Signaling System 7 depend
on a common, out-of-band signaling system that controls calls
through a separate packet-switched data network, which adds to
network vulnerability.(5) Finally,
space-based PSTN components (i.e., satellites) have few
control centers, are susceptible to electronic attack, and
generally do not encrypt their command channels, making the
systems vulnerable to hackers copying their commands and
disrupting service.(6) These conditions imply that the PSTN is
a system that would benefit from better protection of system
integrity and availability.

   Threats to the PSTN affect all national institutions whose
ability to function fully and properly depends on being able
to communicate, be it through telephony, data transmission,
video, or all of these. Indeed, many data networks operated
"privately" by large national corporations or national
institutions such as those described above are private only in
the sense that access is supposed to be limited to corporate
purposes; in fact, national institutions or corporations
generally use all forms of communications, including those
physically carried by the PSTN.(7) However, the physical and
computational infrastructure of these networks is in general
owned by the telecommunications service provider, and this
infrastructure is part of the larger PSTN infrastructure.
Thus, like the Internet, the "private" data network of a
national corporation is in general not physically independent
of the PSTN. Similarly, it is dependence on the PSTN that has
led to failures in the air traffic control system and
important financial markets:

   +    In January 1991, the accidental severing of an AT&T
fiber-optic cable in Newark, New Jersey, led to the disruption
of FAA air traffic control communications in the
Boston-Washington corridor and the shutdown of the New York
Mercantile Exchange and several commodities exchanges. In May
1991, the severing of a fiber-optic cable led to the shutdown
of four of the Federal Aviation Administration's 20 major air
traffic control centers with "massive operational impact."(8)

   +    The 1991 failure of a PSTN component in New York
caused the loss of connectivity between a major securities
house and the Securities Industry Automation Corporation,
resulting in an inability to settle the day's trades over the
network.(9)

   Examples of small-scale activities by the computer
"underground" against the PSTN demonstrate capabilities that,
if coupled to an intent to wage serious information warfare
against the United States, pose a serious threat to the U.S.
information infrastructure:

   +    In 1990, several members of the Legion of Doom's
   Atlanta branch were charged with penetrating and disrupting
   telecommunications network elements. They were accused of
   planting "time bomb" programs in network elements in
   Denver, Atlanta, and New Jersey; these were designed to
   shut down major switching hubs, but were defused by
   telephone carriers before causing damage.(10)

   +    Members of a group known as MOD (various spell-outs)
   were indicted July 8, 1992, on 11 counts. It is
   significant that they appear to have worked as a team.
   Among their alleged activities were developing and
   unleashing "programmed attacks" (see below) on telephone
   company computers and accessing telephone company computers
   to create new circuits and add services with no billing
   records.(11)

   +    Reported (but not well documented) is a growing
   incidence of "programmed attacks."(12) These have been
   detected in several networks and rely on customized
   software targeting specific types of computers or network
   elements. They are rarely destructive, but rather seek to
   add or modify services. "The capability illustrated by this
   category of attacks has not fully matured. However, if a
   coordinated attack using these types of tools were directed
   at the PSTN with a goal of disrupting national
   security/emergency preparedness (NS/EP) telecommunications,
   the result could be significant."(13) (The same point
   probably applies to the goal of disrupting other kinds of
   telecommunications beyond those used for NS/EP.)

   A number of reports and studies (14) have called attention
to the vulnerability of components of the national
telecommunications infrastructure.

----------

   (1)  These private networks for banking include Fedwire
(operated by the Federal Reserve banks), the Clearinghouse for
Interbank Payment Systems (CHIPS; operated by New York
Clearinghouse, an association of money center banks), the
Society for Worldwide Interbank Financial Telecommunication
(SWIFT; an international messaging system that carries
instructions for wire transfers between pairs of correspondent
banks), and the Automated Clearing House (ACH) systems for
domestic transfers, typically used for routine smaller
purchases and payments. In the 1980s, several U.S. banks
aggressively developed global networks with packet switches,
routers, and so on, to interconnect their local and wide area
networks; or, they used third-party service providers to
interconnect. In the 1990s, there are signs that U.S.
international banks are moving to greater use of carrier-
provided or hybrid networks because of the availability of
virtual private networks from carriers. Carrier-provided
networks are more efficient than networks built on top of
dedicated leased lines, because they can allocate demand
dynamically among multiple customers.

   (2)  The air traffic control system uses leased lines to
connect regional air traffic control centers.

   (3)  Over 95 percent of U.S. military and intelligence
community voice and data communications are carried over
facilities owned by public carriers. (See Joint Security
Commission, *Redefining Security: A Report to the Secretary of
Defense and the Director of Central Intelligence*, February
28, 1994, Chapter 8.) Of course, the 95 percent figure includes some
non-critical military communications; however, only 30 percent
of the telecommunications networks that would be used during
wartime operate in the classified environment (and are
presumably more secure), while the other 70 percent are based
on the use of unclassified facilities of public carriers. See
Richard Powers, *Information Warfare: A CSI Special Report*,
Computer Security Institute, Washington, D.C., Fall 1995.

   (4)  Clifford Stoll, *The Cuckoo's Egg*, Pocket Books, New
York, 1989.

   (5)  National Research Council, *Growing Vulnerability of
the Public Switched Networks: Implications for National
Security and Emergency Preparedness*, National Academy Press,
Washington, D.C., 1989, p. 36; Reliability and Vulnerability
Working Group, Telecommunications Policy Committee,
Information Infrastructure Task Force, *Reliability and
Vulnerability of the NII: Capability Assessments*, from the
National Communications System home page on WWW,
http://164.117.147.223/nc-ia/html.

   (6)  Reliability and Vulnerability Working Group,
Telecommunications Policy Committee, Information
Infrastructure Task Force, *Reliability and Vulnerability of
the NII: Capability Assessments*, from the National
Communications System home page on WWW,
http://164.117.147.223/nc-ia/html.

   (7)  Both shared circuits and private networks are expected
to grow dramatically in the next several years. See for
example, Michael Csenger, "Private lines dead? Don't buy those
flowers just yet," *Network World*, May 1, 1995, p. 1.

   (8)  *Software Engineering Notes*, Volume 17, January 1992,
as cited in Peter G. Neumann, *Computer Related Risks*,
Addison-Wesley, Reading, Mass., 1995, p. 17.

   (9)  See Office of Technology Assessment, U.S. Congress,
*U.S. Banks and International Telecommunications -- Background
Paper*, OTA-BP-TCT-100, U.S. Government Printing Office,
Washington, D.C., September 1992, pp. 32-33.

   (10) National Communications System (NCS), *The Electronic
Intrusion Threat to National Security and Emergency
Preparedness Telecommunications: An Awareness Document*, 2nd
ed., NCS, Alexandria, Va., December 5, 1994, p. 2-5.

   (11) NCS, *The Electronic Intrusion Threat to National
Security and Emergency Preparedness Telecommunications*, 1994,
pp. 2-8 to 2-9.

   (12) NCS, *The Electronic Intrusion Threat to National
Security and Emergency Preparedness Telecommunications*, 1994,
p. 2-6.

   (13) NCS, *The Electronic Intrusion Threat to National
Security and Emergency Preparedness Telecommunications*, 1994,
p. 2-6.

   (14) Joint Security Commission, *Redefining Security: A
Report to the Secretary of Defense and the Director of Central
Intelligence*, Washington, D.C., February 28, 1994; National
Research Council, *Growing Vulnerability of the Public
Switched Networks: Implications for National Security and
Emergency Preparedness*, National Academy Press, Washington,
D.C., 1989; NCS, *The Electronic Intrusion Threat to National
Security and Emergency Preparedness Telecommunications*, 1994;
Reliability and Vulnerability Working Group,
Telecommunications Policy Committee, Information
Infrastructure Task Force, *Reliability and Vulnerability of
the NII: Capability Assessments*, from the National
Communications System home page on WWW,
http://164.117.147.223/nc-ia/html.

____________________________________________________________


                 BOX 1.9 Information Warfare

             "Information warfare" is a term used in many different
ways. Of most utility for this report is the definition of
information warfare (IW) as hostile action that targets the
information systems and information infrastructure of an
opponent (i.e., offensive actions that attack an opponent's
communications, weapon systems, command and control systems,
intelligence systems, information components of the civil and
societal infrastructure such as the power grid and banking
system) coupled with simultaneous actions seeking to protect
U.S. and allied systems and infrastructure from such attacks.
Other looser uses of the term information warfare" include the
following:

   +    The use of information and tactical intelligence to
apply weapon systems more effectively. IW may be used in
connection with information-based suppression of enemy air
defenses or "smart" weapons using sensor data to minimize the
volume of ordnance needed to destroy a target.

   +    The targeting of companies' information systems for IW
attacks. As industrial espionage spreads and/or international
competitiveness drives multinational corporations into
military-like escapades, the underlying notion of
information-based probing of and attack on a competitor's
information secrets could take on a flavor of intergovernment
military or intelligence activities.

   +    The fight against terrorism, organized crime, and even
street crime, which might be characterized as IW to the extent
that information about these subjects is used to prosecute the
battle. This usage is not widespread, although it may develop
in the future.

   Usage of the term has shifted somewhat as federal agencies,
notably the Department of Defense, struggle to fully
appreciate this new domain of warfare (or low-intensity
conflict) and to create relevant policy and doctrine for it.
Conversely, there is some discussion of the vulnerabilities of
the U.S. civil information infrastructure to such offense. The
range of activities that can take place in information warfare
is broad:

   +    Physical destruction of information-handling
facilities to destroy or degrade functionality;

   +    Denial of use of an opponent's important information
systems;

   +    Degradation of effectiveness (e.g., accuracy, speed of
response) of an opponent's information systems;

   +    Insertion of spurious, incorrect, or otherwise
misleading data into an opponent's information systems (e.g.,
to destroy or modify data, or to subvert software processes
via improper data inputs);

   +    Withdrawal of significant tactical or strategic data
from an opponent's information systems;

   +    Insertion of malicious software into an opponent's
system to affect its intended behavior in various ways, and
perhaps, to do so at a time controlled by the aggressor; and

   +    Subversion of an opponent's software and/or hardware
installation to make it an in-place self-reporting mole for
intelligence purposes.

   As an operational activity, information warfare is clearly
related closely to, but yet distinct from, intelligence
functions that are largely analytical. IW is also related to
information security, since its techniques are pertinent both
to prosecution of offensive IW and to protection for defensive
IW.

____________________________________________________________

[End Chapter 1]
____________________________________________________________









[Head note all pages: May 30, 1996, Prepublication Copy
Subject to Further Editorial Correction]


                              2

                        Cryptography:
              Roles, Market, and Infrastructure


   Cryptography is a technology that can play important roles
in addressing certain types of information vulnerability,
although it is not sufficient to deal with all threats to
information security. As a technology, cryptography is
embedded into products that are purchased by a large number of
users; thus, it is important to examine various aspects of the
market for cryptography. Chapter 2 describes cryptography as
a technology used in products, as a product within a larger
market context, and with reference to the infrastructure
needed to support its large-scale use.


                 2.1 CRYPTOGRAPHY IN CONTEXT

   Computer-system security, and its extension, network
security, are intended to achieve many purposes. Among them
are safeguarding physical assets from damage or destruction
and ensuring that resources such as computer time, network
connections, and access to databases are available only to
individuals -- or to other systems or even software processes
-- authorized to have them.(1) Overall information security is
dependent on many factors, including various technical
safeguards, trustworthy and capable personnel, high degrees of
physical security, competent administrative oversight, and
good operational procedures. Of the available technical
safeguards, cryptography has been one of the least utilized to
date.(2)

   In general, the many security safeguards in a system or
network not only fulfill their principal task but also act
collectively to mutually protect one another. In particular,
the protection or operational functionality that can be
afforded by the various cryptographic safeguards treated in
this report will inevitably require that the hardware or
software in question be embedded in a secure environment. To
do otherwise is to risk that the cryptography might be
circumvented, subverted, or misused -- hence leading to a
weakening or collapse of its intended protection.

   As individual stand-alone computer systems have been
incorporated into ever larger networks (e.g., local-area
networks, wide-area networks, the Internet), the requirements
for cryptographic safeguards have also increased. For example,
users of the earliest computer systems were almost always
clustered in one place and could be personally recognized as
authorized individuals, and communications associated with a
computer system usually were contained within a single
building. Today, users of computer systems can be connected
with one another worldwide, through the public switched
telecommunications network, a local area network, satellites,
microwave towers, and radio transmitters. Operationally, an
individual or a software process in one place can request
service from a system or a software process in a far distant
place. Connectivity among systems is impromptu and occurs on
demand; the Internet has demonstrated how to achieve it. Thus,
it is now imperative for users and systems to identify
themselves to one another with a high degree of certainty and
for distant systems to know with certainty what privileges for
accessing databases or software processes a remote request
brings. Protection that could once be obtained by geographic
propinquity and personal recognition of users must now be
provided electronically and with extremely high levels of
certainty.

----------

   (1)  The terms "information security" or shortened versions
such as INFOSEC, COMPSEC, and NETSEC are also in use.

   (2)  Other safeguards, in particular software safeguards,
are addressed in various standard texts and reports. See, for
example, National Institute of Standards and Technology, *An
Introduction to Computer Security*, NIST Special Publication
800-12, Department of Commerce, October 1995; *Trusted
Computer System Evaluation Criteria*, Department of Defense,
August 15, 1983; Computer Science and Telecommunications Board
(CSTB), National Research Council, *Computers at Risk: Safe
Computing in the Information Age*, National Academy Press,
Washington, D.C., 1991.

____________________________________________________________


        2.2 WHAT IS CRYPTOGRAPHY AND WHAT CAN IT DO?

   The word "cryptography" is derived from Greek words that
mean secret writing. Historically, cryptography has been used
to hide information from access by unauthorized parties,
especially during communications when it would be most
vulnerable to interception. By preserving the secrecy, or
confidentiality, of information, cryptography has played a
very important role over the centuries in military and
national affairs.(3)

   In the traditional application of cryptography for
confidentiality, an originator (the first party) creates a
message intended for a recipient (the second party), protects
(encrypts) it by a cryptographic process, and transmits it as
ciphertext. The receiving party decrypts the received
ciphertext message to reveal its true content, the plaintext.
Anyone else (the third party) who wishes undetected and
unauthorized access to the message must penetrate (by
cryptanalysis) the protection afforded by the cryptographic
process.

   In the classical use of cryptography to protect
communications, it is necessary that both the originator and
recipient(s) have common knowledge of the cryptographic
process (the algorithm or cryptographic algorithm) and that
both share a secret common element -- typically, the key or
cryptographic key, which is a piece of information, not a
material object. In the encryption process, the algorithm
transforms the plaintext into the ciphertext, using a
particular key; the use of a different key results in a
different ciphertext. In the decryption process, the algorithm
transforms the ciphertext into the plaintext, using the key
that was used to encrypt (4) the original plaintext. Such a
scheme, in which both communicating parties must have a common
key, is now called *symmetric cryptography* or *secret-key
cryptography*; it is the kind that has been used for centuries
and written about widely.(5) It has the property, usually an
operational disadvantage, of requiring a safe method of
distributing keys to relevant parties (*key distribution* or
*key management*).
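
   To make the symmetric relationship concrete, a minimal
sketch in the Python programming language follows. The
"cipher" is a toy keystream construction chosen purely for
illustration; it is not a secure algorithm, and the key and
message values are hypothetical.

   # Toy illustration of symmetric (secret-key) cryptography:
   # the same shared key encrypts and decrypts.
   import hashlib

   def keystream(key, length):
       # Derive a pseudo-random byte stream from the shared key.
       out, counter = b"", 0
       while len(out) < length:
           block = key + counter.to_bytes(8, "big")
           out += hashlib.sha256(block).digest()
           counter += 1
       return out[:length]

   def toy_encrypt(key, plaintext):
       # XOR the plaintext with the keystream.
       ks = keystream(key, len(plaintext))
       return bytes(p ^ k for p, k in zip(plaintext, ks))

   def toy_decrypt(key, ciphertext):
       # Decryption is the same operation with the same key.
       return toy_encrypt(key, ciphertext)

   shared_key = b"key known to originator and recipient"
   ciphertext = toy_encrypt(shared_key, b"attack at dawn")
   assert toy_decrypt(shared_key, ciphertext) == b"attack at dawn"

   A party lacking the shared key sees only ciphertext and
must resort to cryptanalysis; any party holding the key can
perform the same decryption step.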

   It can be awkward to arrange for symmetric and secret keys
to be available to all parties with whom one might wish to
communicate, especially when the list of parties is large.
However, a scheme called *asymmetric cryptography* (or,
equivalently, *public-key cryptography*), developed in the
mid-1970s, helps to mitigate many of these difficulties
through the use of different keys for encryption and
decryption.(6) Each participant actually has two keys. The
public key is published, is freely available to anyone, and is
used for encryption; the private key is held in secrecy by the
user and is used for decryption.(7) Because the two keys are
inverses, knowledge of the public key enables the derivation
of the private key in theory. However, in a well-designed
public-key system, it is computationally infeasible in any
reasonable length of time to derive the private key from
knowledge of the public key.
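
   The inverse relationship between the two keys can be
illustrated with "textbook" RSA, using deliberately tiny and
insecure numbers chosen only for this example; a real system
would use far larger values. The sketch below is in Python and
is illustrative only.

   # Toy illustration of asymmetric (public-key) cryptography.
   p, q = 61, 53      # secret primes known to the key owner
   n = p * q          # modulus; published with the public key
   e = 17             # public exponent; published
   d = 2753           # private exponent; kept secret
                      # (e * d = 1 modulo (p - 1) * (q - 1))

   message = 65       # a message encoded as a number below n

   # Anyone may encrypt using the published values (e, n).
   ciphertext = pow(message, e, n)

   # Only the holder of the private exponent d can decrypt.
   recovered = pow(ciphertext, d, n)
   assert recovered == message

   Deriving d from the published values e and n amounts to
factoring n; with toy numbers this is trivial, but with
properly sized moduli it is computationally infeasible, which
is the basis of the system's security.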

   A significant operational difference between symmetric and
asymmetric cryptography is that with asymmetric cryptography
anyone who knows a given person's public key can send a secure
message to that person. With symmetric cryptography, only a
selected set of people (those who know the shared secret key)
can communicate. While it is not mathematically provable, all
known asymmetric cryptographic systems are slower than their
symmetric cryptographic counterparts, and the more public
nature of asymmetric systems lends credence to the belief that
this will always be true. Generally, symmetric cryptography is
used when a large amount of data needs to be encrypted or when
the encryption must be done within a given time period;
asymmetric cryptography is used for short messages, for
example, to protect key distribution for a symmetric
cryptographic system.
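
   A minimal Python sketch of this hybrid arrangement follows;
the RSA numbers and the keystream cipher are the same toys
used in the earlier sketches and are not secure.

   # Toy illustration of hybrid encryption: asymmetric for the
   # short session key, symmetric for the bulk data.
   import hashlib, secrets

   def xor_keystream(key, data):
       out, counter = b"", 0
       while len(out) < len(data):
           block = key + counter.to_bytes(8, "big")
           out += hashlib.sha256(block).digest()
           counter += 1
       return bytes(a ^ b for a, b in zip(data, out))

   n, e, d = 3233, 17, 2753   # recipient's toy RSA key pair

   bulk_message = b"a long report to be kept confidential" * 100

   # A short random session key is protected asymmetrically.
   session_int = secrets.randbelow(n)
   session_key = session_int.to_bytes(2, "big")
   wrapped_key = pow(session_int, e, n)

   # The bulk data is protected symmetrically under that key.
   bulk_cipher = xor_keystream(session_key, bulk_message)

   # The recipient unwraps the session key with the private
   # exponent d, then decrypts the bulk data symmetrically.
   recovered = pow(wrapped_key, d, n).to_bytes(2, "big")
   assert xor_keystream(recovered, bulk_cipher) == bulk_message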

   Regardless of the particular approach taken, the
applications of cryptography have gone beyond its historical
roots as secret writing; today, cryptography serves as a
powerful tool in support of system security. Cryptography can
provide many useful capabilities:

   +    *Confidentiality* -- the characteristic that
information is protected from being viewed in transit during
communications and/or when stored in an information system.
With cryptographically provided confidentiality, encrypted
information can fall into the hands of someone not authorized
to view it without being compromised. It is almost entirely
the confidentiality aspect of cryptography that has posed
public policy dilemmas.

   The other capabilities, described below, can be considered
collectively as nonconfidentiality or collateral uses of
cryptography:

   +    *Authentication* -- cryptographically based assurance
that an asserted identity is valid for a given person (or
computer system). With such assurance, it is difficult for an
unauthorized party to impersonate an authorized one.

   +    *Integrity check* -- cryptographically based assurance
that a message or computer file has not been tampered with or
altered.(8) With such assurance, it is difficult for an
unauthorized party to alter data.

   +    *Digital signature* -- cryptographically based
assurance that a message or file was sent or created by a
given person. A digital signature cryptographically binds the
identity of a person with the contents of the message or file,
thus providing nonrepudiation -- the inability to deny the
authenticity of the message or file. The capability for
nonrepudiation results from encrypting the digest (or the
message or file itself) with the private key of the signer.
Anyone can verify the signature of the message or file by
decrypting the signature using the public key of the sender.
Since only the sender should know his or her own private key,
assurance is provided that the signature is valid and the
sender cannot later repudiate the message. If a person
divulges his or her private key to any other party, that party
can impersonate the person in all electronic transactions.
(See the illustrative sketch following this list.)

   +    *Digital date/time stamp* -- cryptographically based
assurance that a message or file was sent or created at a
given date and time. Generally, such assurance is provided by
an authoritative organization that appends a date/time stamp
and digitally signs the message or file.
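
   The integrity check and digital signature capabilities can
be illustrated together. In the Python sketch below, a
fixed-length digest of a message is computed with a one-way
hash function and then "encrypted" with the signer's private
key; anyone holding the matching public key can verify it. The
RSA values are the same toy numbers used earlier and are not
secure.

   # Toy illustration of a digest-based digital signature.
   import hashlib

   n, e, d = 3233, 17, 2753   # signer's toy public (e, n) and
                              # private (d) key material

   message = b"I agree to the stated terms."

   # The digest is a fixed-length condensation of the message;
   # changing even one character of the message changes it.
   digest = int.from_bytes(hashlib.sha256(message).digest(),
                           "big") % n

   # Only the signer, who knows d, can produce the signature.
   signature = pow(digest, d, n)

   # Any recipient can recompute the digest and verify the
   # signature using the public key (e, n).
   assert pow(signature, e, n) == digest

   A digital date/time stamp can be produced the same way: an
authoritative party appends the date and time to the message
or its digest and signs the result with its own private key.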

   These cryptographic capabilities can be used in
complementary ways. For example, authentication is basic to
controlling access to system or network resources. A person
may use a password to authenticate his own identity; only when
the proper password has been entered will the system allow the
user to "log on" and obtain access to files, e-mail, and so
on.(9) But passwords have many
limitations as an access control measure (e.g., people tell
others their passwords or a password is learned via
eavesdropping), and cryptographic authentication techniques
can provide much better and more effective mechanisms for
limiting system or resource access to authorized parties.
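
   One such mechanism is a challenge-response exchange, in
which the secret shared between the user and the system is
never transmitted at all. The Python sketch below uses a keyed
message authentication code for this purpose; the secret and
challenge values shown are hypothetical.

   # Toy illustration of cryptographic challenge-response
   # authentication.
   import hmac, hashlib, secrets

   shared_secret = b"secret issued to the user and the system"

   # The system issues a fresh, unpredictable challenge ...
   challenge = secrets.token_bytes(16)

   # ... and the user's device answers with a keyed code over
   # that challenge, proving knowledge of the secret.
   response = hmac.new(shared_secret, challenge,
                       hashlib.sha256).digest()

   # The system computes the same value and compares the two.
   expected = hmac.new(shared_secret, challenge,
                       hashlib.sha256).digest()
   assert hmac.compare_digest(response, expected)

   Because each challenge is used only once, a response
captured by an eavesdropper is worthless in a later session,
unlike a password learned through eavesdropping.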

   Access controls can be applied at many different points
within a system. For example, the use of a dial-in port on an
information system or network can require the use of
cryptographic access controls to ensure that only the proper
parties can use the system or network at all. Many systems and
networks accord privileges or access to resources depending on
the specific identity of a user; thus, a hospital information
system may grant physicians access that allows entering orders
for patient treatment, whereas laboratory technicians may not
have such access. Authentication mechanisms can also be used
to generate an audit trail identifying those who have accessed
particular data, thus facilitating a search for those known to
have compromised confidential data.

   In the event that access controls are successfully
bypassed, the use of encryption on data stored and
communicated in a system provides an extra layer of
protection. Specifically, if an intruder is denied easy access
to stored files and communications, he may well find it much
more difficult to understand the internal workings of the
system and thus be less capable of causing damage or reading
the contents of encrypted inactive data files that may hold
sensitive information. Of course, when an application opens a
data file for processing, that data is necessarily unencrypted
and is vulnerable to an intruder that might be present at that
time.

   Authentication and access control can also help to protect
the privacy of data stored on a system or network. For
example, a particular database application storing data files
in a specific format could allow its users to view those
files. If the access control mechanisms are set up in such a
way that only certain parties can access that particular
database application, access to the database files in question
can be limited, and thus the privacy of data stored in those
databases protected. On the other hand, an unauthorized user
may be able to obtain access to those files through a
different, uncontrolled application, or even through the
operating system itself. Thus, encryption of those files is
necessary to protect them against such "back-door" access.(10)

   The various cryptographic capabilities described above may
be used within a system in order to accomplish a set of tasks.
For example, a banking system may require confidentiality and
integrity assurances on its communications links,
authentication assurances for all major processing functions,
and integrity and authentication assurances for high-value
transactions. On the other hand, merchants may need only
digital signatures and date/time stamps when dealing with
external customers or cooperating banks when establishing
contracts. Furthermore, depending on the type of capability to
be provided, the underlying cryptographic algorithms may or
may not be different.

   Finally, when considering what cryptography can do, it is
worth making two practical observations. First, the initial
deployment of any technology often brings out unanticipated
problems, simply because the products and artifacts embodying
that technology have not had the benefit of successive cycles
of failure and repair. Similarly, human procedures and
practices have not been tested against the demands of
real-life experience. Cryptography is unlikely to be any
different, and so it is probable that early large-scale
deployments of cryptography will exhibit exploitable
vulnerabilities.(11)

   The second point is that against a determined opponent that
is highly motivated to gain unauthorized access to data, the
use of cryptography may well simply lead that opponent to
exploit some other vulnerability in the system or network on
which the relevant data is communicated or stored, and such an
exploitation may well be successful. But the use of
cryptography can help to raise the cost of gaining improper
access to data and may prevent a resource-poor opponent from
being successful at all.

   More discussion of cryptography can be found in Appendix C.

----------

   (3)  The classic work on the history of cryptography is
David Kahn, *The Codebreakers*, MacMillan, New York, 1967.

   (4)  This report uses the term "encrypt" to describe the
act of using an encryption algorithm with a given key to
transform one block of data, usually plaintext, into another
block, usually ciphertext.

   (5)  Historical perspective is provided in David Kahn,
*Kahn on Codes*, MacMillan, New York, 1983; F.W. Winterbotham,
*The Ultra Secret*, Harper & Row, New York, 1974; and Ronald
Lewin, *Ultra Goes to War*, Hutchinson & Co., London, 1978. A
classic reference on the fundamentals of cryptography is
Dorothy Denning, *Cryptography and Data Security*,
Addison-Wesley, Reading, Mass., 1982.

   (6)  Gustavus J. Simmons (ed.), *Contemporary Cryptology.
The Science of Information Integrity*, IEEE Press, Piscataway,
New Jersey, 1992; Whitfield Diffie, "The First Ten Years of
Public-Key Cryptography," *Proceedings of the IEEE*, Vol. 76,
1988, pp. 560-577.

   (7)  The seminal paper on public-key cryptography is
Whitfield Diffie and Martin Hellman, "New Directions in
Cryptography," *IEEE Transactions on Information Theory*,
Volume IT-22, 1976, pp. 644-654.

   (8)  Digital signatures and integrity checks use a
condensed form of a message or file -- called a digest --
which is created by passing the message or file through a
one-way hash function. The digest is of fixed length and is
independent of the size of the message or file. The hash
function is designed to make it highly unlikely that different
messages (or files) will yield the same digest, and to make it
computationally very difficult to modify a message (or file)
but retain the same digest.

   (9)  An example more familiar to many is that the entry of
an appropriate personal identification number into an
automatic teller machine (ATM) gives the ATM user access to
account balances or cash.

   (10) The measure-countermeasure game can continue
indefinitely. In response to file encryption, an intruder can
insert into an operating system a Trojan horse program that
waits for an authorized user to access the encrypted database.
Since the user is authorized, the database will allow the
decryption of the relevant file and the intruder can simply
"piggy-back" on that decryption. Thus, those responsible for
system security must provide a way to check for Trojan horses,
and so the battle goes round.

   (11) For a discussion of this point, see Ross Anderson,
"Why Cryptosystems Fail," *Communications of the ACM*, Volume
37(11), November, 1994, pp. 32-40.

____________________________________________________________


   2.3 HOW CRYPTOGRAPHY FITS INTO THE BIG SECURITY PICTURE

   In the context of confidentiality, the essence of
information security is a battle between information
protectors and information interceptors. Protectors -- who may
be motivated by "good" reasons (if they are legitimate
businesses) or "bad" reasons (if they are criminals) -- wish
to restrict access to information to a group that they select.
Interceptors -- who may also be motivated by "bad" reasons (if
they are unethical business competitors) or "good" reasons (if
they are law enforcement agents investigating serious crimes)
-- wish to obtain access to the information being protected
whether or not they have the permission of the information
protectors. It is this dilemma that is at the heart of the
public policy controversy and is addressed in greater detail
in Chapter 3.

   From the perspective of the information interceptor,
encryption is only one of the problems to be faced. In
general, the complexity of today's information systems poses
many technical barriers (Section 2.3.1). On the other hand,
the information interceptor may be able to exploit product
features or specialized techniques to gain access (Section
2.3.2).


             2.3.1 Technical Factors Inhibiting
                 Access to Information (12)

   Compared to the task of tapping an analog telephone line,
obtaining access to the content of a digital information
stream can be quite difficult. With analog "listening"
(traditional telephony or radio interception), the technical
challenge is obtaining access to the communications channel.
When communications are digitized, gaining access to the
channel is only the first step: one must then unravel the
digital format, a task that can be computationally very
complex. Furthermore, the complexity of the digital format
tends to increase over time, because more advanced
information technology generally implies increased
functionality and a need for more efficient use of available
communications capacity.

   Increased complexity is reflected in particular in the
interpretation of the digital stream that two systems might
use to communicate with each other or the format of a file
that a system might use to store data. Consider, for example,
one particular sequence of actions used to communicate
information. The original application in the sending system
might have started with a plaintext message, and then
compressed it (to make it smaller); encrypted it (to conceal
its meaning); and appended error-control bits to the
compressed, encrypted message (to prevent errors from creeping
in during transmission).(13) Thus, a party attempting to
intercept a communication between the sender and the receiver
could be faced with a data stream that would represent the
combined output of many different operations that transform
the data stream in some way. The interceptor would have to
know the error-control scheme and the decompression algorithms
as well as the key and the algorithm used to encrypt the
message.
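
   The Python sketch below mimics this layering with standard
compression and checksum routines and a toy cipher; it is
illustrative only, but it shows how many separate
transformations an interceptor must undo.

   # Toy illustration of a layered digital stream:
   # compression, then encryption, then error-control bits.
   import zlib, hashlib

   def toy_encrypt(key, data):
       out, counter = b"", 0
       while len(out) < len(data):
           block = key + counter.to_bytes(8, "big")
           out += hashlib.sha256(block).digest()
           counter += 1
       return bytes(a ^ b for a, b in zip(data, out))

   plaintext = b"quarterly sales figures follow ... " * 20

   compressed = zlib.compress(plaintext)                # step 1
   encrypted = toy_encrypt(b"shared key", compressed)   # step 2
   checksum = zlib.crc32(encrypted).to_bytes(4, "big")  # step 3
   transmitted = encrypted + checksum

   # The intended recipient peels the layers off in reverse.
   payload, check = transmitted[:-4], transmitted[-4:]
   assert zlib.crc32(payload).to_bytes(4, "big") == check
   decrypted = toy_encrypt(b"shared key", payload)
   assert zlib.decompress(decrypted) == plaintext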

   When an interceptor moves onto the lines that carry bulk
traffic, isolating the bits associated with a particular
communication of interest is itself quite difficult.(14) A
high-bandwidth line (e.g., a long-haul fiber-optic cable)
typically carries hundreds or thousands of different
communications; any given message may be broken into distinct
packets and intermingled with other packets from other
contemporaneously operating applications.(15) The traffic on
the line may be encrypted "in bulk" by the line provider, thus
providing an additional layer of protection against the
interceptor. Moreover, since a message traveling from point A
to point B may well be broken into packets that traverse
different physical paths en route, an interceptor at any given
point in between A and B may not even see all of the packets
pass by.

   Another factor inhibiting access to information is the use
of technologies that facilitate anonymous communications. For
the most part, intercepted communications are worthless if the
identity of the communicating parties is not known. In
telephony, call forwarding and pager callbacks from pay
telephones have sometimes frustrated the efforts of law
enforcement officials conducting wiretaps. In data
communications, so-called anonymous remailers can strip out
all identifying information from an Internet e-mail message
sent from person A to person B in such a way that person B
does not know the identity of person A. Some remailers even
support return communications from person B to person A
without the need for person B to know the identity of person
A.

   Access is made more difficult because an information
protector can switch communications from one medium to another
very easily without changing end-user equipment. Some forms of
media may be easily accessed by an interceptor (e.g.,
conventional radio), whereas other forms may be much more
challenging (e.g., fiber-optic cable, spread-spectrum radio).
The proliferation of different media that can interoperate
smoothly even at the device level will continue to complicate
the interceptor's attempts to gain access to communications.

   Finally, obtaining access also becomes more difficult as
the number of service providers increases (Box 2.1). In the
days when AT&T held a monopoly on voice communications and
criminal communications could generally be assumed to be
carried on AT&T-operated lines, law enforcement and national
security authorities needed only one point of contact with
whom to work. As the telecommunications industry becomes
increasingly heterogeneous, law enforcement authorities may
well be uncertain about what company to approach about
implementing a wiretap request.

----------

   (12) This section addresses technical factors that inhibit
access to information. But technical measures are only one
class of techniques that can be used to improve information
security. For example, statutory measures can help contribute
to information security. Laws that impose criminal penalties
for unauthorized access to computer systems have been used to
prosecute intruders. Such laws are intended to deter attacks
on information systems, and to the extent that individuals do
not exhibit such behavior, system security is enhanced.

   (13) Error control is a technique used both to detect
errors in transmission and sometimes to correct them as well.

   (14) This point is made independently in a report that came
to the attention of the committee as this report was going to
press. A staff study of the Permanent Select Committee on
Intelligence, House of Representatives, concluded that "the
ability to filter through the huge volumes of data and to
extract the information from the layers of formatting,
multiplexing, compression, and transmission protocols applied
to each message is the biggest challenge of the future,
[while] increasing amounts and sophistication of encryption
add another layer of complexity." *IC21: Intelligence Community
in the 21st Century*, p. 121.

   (15) Paul Haskell and David G. Messerschmitt, "In Favor of
an Enhanced Network Interface for Multimedia Services,"
submitted to *IEEE Multimedia Magazine*.

____________________________________________________________


      2.3.2 Factors Facilitating Access to Information


System or Product Design

   Unauthorized access to protected information can
inadvertently be facilitated by product or system features
that are intended to provide legitimate access but instead
create unintentional loopholes or weaknesses that can be
exploited by an interceptor. Such points of access, which may
be deliberately incorporated into product or system designs,
include the following:

   +    *Maintenance and monitoring ports*.(16) For example,
many telephone switches and computer systems have dial-in
ports that are intended to facilitate monitoring and remote
maintenance and repair by off-site technicians.

   +    *Master keys*. A product can have a single master key
that allows its possessor to decrypt all ciphertext produced
by the product.

   +    *Mechanisms for key escrow or key backup*. A third
party, for example, may store an extra copy of a private key
or a master key. Under appropriate circumstances, the third
party releases the key to the appropriate individual(s), who
is (are) then able to decrypt the ciphertext in question. This
subject is discussed at length in Chapter 5.

   +    *Weak encryption defaults*. A product capable of
providing very strong encryption may be designed in such a way
that users invoke those capabilities only infrequently. For
example, encryption on a secure telephone may be designed so
that the use of encryption depends on the user pressing a
button at the start of a telephone call. The requirement to
press a button to invoke encryption is an example of a weak
default, because the telephone could be designed so that
encryption is invoked automatically when a call is initiated;
when weak defaults are designed into systems, many users will
forget to press the button.

   Despite the good reasons for designing systems and products
with these various points of access (e.g., facilitating remote
access through maintenance ports to eliminate travel costs of
system engineers), any such point of access can be exploited
by unauthorized individuals as well.


Methods Facilitating Access to Information

   Surreptitious access to communications can also be gained
by methods such as the following:

   +    *Interception in the ether*. Many point-to-point
communications make use of a wireless (usually radio) link at
some point in the process. Since it is impossible to ensure
that a radio broadcast reaches only its intended receiver(s),
communications carried over wireless links -- such as those
involving cellular telephones and personal pagers -- are
vulnerable to interception by unauthorized parties.

   +    *Use of pen registers*. Telephone communications
involve both the content of a call and call-setup information
such as numbers called, originating number, time and length of
call, and so on. Setup information is often easily accessible,
some of it even to end users.(17)

   +    *Wiretapping*. To obtain the contents of a call
carried exclusively by nonwireless means, the information
carried on a circuit (actually, a replica of the information)
is sent to a monitoring station. A call can be wiretapped when
an eavesdropper picks up an extension on the same line, hooks
up a pair of alligator clips to the right set of terminals, or
obtains the cooperation of telephone company officials in
monitoring a given call at a chosen location.

   +    *Exploitation of related data*. A great deal of useful
information can be obtained by examining in detail a digital
stream that is associated with a given communication. For
example, people have developed communications protocol
analyzers that examine traffic as it flows by a given point
for passwords and other sensitive information.

   +    *Reverse engineering*. Decompilation or disassembly of
software can yield deep understanding of how that software
works. One implication is that any algorithm built into
software cannot be assumed to be secret for very long, since
disassembly of the software will inevitably reveal it to a
technically trained individual.

   +    *Cryptanalysis* (discussed in greater detail in
Appendix C). Cryptanalysis is the task of recovering the
plaintext corresponding to a given ciphertext without
knowledge of the decrypting key. Successful cryptanalysis can
be the result of:

        --  *Inadequately sized keys*. A product with
            encryption capabilities that implements a
            strong cryptographic algorithm with an
            inadequately sized key is vulnerable to a
            "brute-force" attack.(18) Box 2.2 provides
            more detail, and an illustrative sketch
            appears at the end of this section.

        --  *Weak encryption algorithms or poorly designed
            products*. Some encryption algorithms and
            products have weaknesses that, if known to an
            attacker, require the testing of only a small
            fraction of the keys that could in principle
            be the proper key.

   +    *Product penetration*. Like weak encryption, certain
design choices such as limits on the maximum size of a
password, the lack of a reasonable lower bound on the size of
a password, or use of a random-number generator that is not
truly random may lead to a product that presents a work factor
for an attacker that is much smaller than the theoretical
strength implied by the algorithm it uses.(19)

   +    *Monitoring of electronic emissions*. Most electronic
communications devices emit electromagnetic radiation that is
highly correlated with the information carried or displayed on
them. For example, the contents of an unshielded computer
display or terminal can in principle be read from a distance
(estimates range from tens of meters to hundreds of meters) by
equipment specially designed to do so. Coined by a U.S.
government program, TEMPEST is the name of a class of
techniques to safeguard against monitoring of emissions.

   +    *Device penetration*. A software-controlled device can
be penetrated in a number of ways. For example, a virus may
infect it, making a clandestine change. A message or a file
can be sent to an unwary recipient who activates a hidden
program when the message is read or the file is opened; such
a program, once active, can record the keystrokes of the
person at the keyboard, scan the mass storage media for
sensitive data and transmit it, or make clandestine
alterations to stored data.

   +    *Infrastructure penetration*. The infrastructure used
to carry communications is often based on software-controlled
devices such as routers. Router software can be modified as
described above to copy and forward all (or selected) traffic
to an unauthorized interceptor.

   The last two techniques can be categorized as invasive,
because they alter the operating environment in order to
gather or modify information. In a network environment, the
most common mechanisms of invasive attacks are called viruses
and Trojan horses. A virus gains access to a system, hides
within that system, and replicates itself to infect other
systems. A Trojan horse exploits a weakness from within a
system. Either approach can result in intentional or
unintentional denial of services for the host system.(20)
Modern techniques that combine the two approaches to covertly
exfiltrate data from a system are becoming increasingly
powerful and difficult to detect.(21) Such attacks will gain
in popularity as networks become more highly interconnected.
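
   The brute-force key searches mentioned above (and in notes
18 and 19) can be illustrated with a toy cipher and an
artificially small 16-bit key, so that the exhaustive search
finishes immediately. Each bit added to the key length doubles
the number of candidates that must be tested.

   # Toy illustration of a brute-force key search over a
   # deliberately tiny (16-bit) key space.
   import hashlib

   def toy_encrypt(key, data):
       stream = hashlib.sha256(key).digest()
       return bytes(a ^ b for a, b in zip(data, stream))

   true_key = (54321).to_bytes(2, "big")   # 2**16 possible keys
   known_plaintext = b"known message header"
   ciphertext = toy_encrypt(true_key, known_plaintext)

   # The attacker simply tries every possible key in turn.
   for candidate in range(2 ** 16):
       key = candidate.to_bytes(2, "big")
       if toy_encrypt(key, ciphertext) == known_plaintext:
           break
   assert key == true_key

   A 16-bit search is instantaneous on any modern computer; a
40-bit search requires 2**24 (roughly 16 million) times as
much work, and each additional bit doubles it again.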

----------

   (16) A port is a point of connection to a given information
system to which another party (another system, an individual)
can connect.

   (17) "Caller ID," a feature that identifies the number of
the calling party, makes use of call-setup information carried
on the circuit.

   (18) A brute-force attack against an encryption algorithm
is a computer-based test of all possible keys for that
algorithm undertaken in an effort to discover the key that
actually has been used. Hence, the difficulty and time to
complete such attacks increase markedly as the key length
grows (specifically, the time doubles for every bit added to
the key length).

   (19) Work factor is used in this report to mean a measure
of the difficulty of undertaking a brute-force test of all
possible keys against a given ciphertext (and known
algorithm). A 40-bit work factor means that a brute-force
attack must test at most 2^40 keys to be certain that the
corresponding plaintext message is retrieved. In the
literature, the term "work factor" is also used to mean the
ratio of work needed for brute-force cryptanalysis of an
encrypted message to the work needed to encrypt that message.

   (20) On November 2, 1988, Robert T. Morris, Jr., released
a "worm" program that spread itself throughout the Internet
over the course of the next day. At trial, Morris maintained
that he had not intended to cause the effects that had
resulted, a belief held by many in the Internet community.
Morris was convicted on a felony count of unauthorized access.
See Peter G. Neumann, *Computer Related Risks*, Addison
Wesley, Reading, Mass., 1995, p. 133.

   (21) The popular World Wide Web provides an environment in
which an intruder can act to steal data. For example, an
industrial spy wishing to obtain data stored on the
information network of a large aerospace company can set up a
Web page containing information of interest to engineers at
the aerospace company (e.g., information on foreign aerospace
business contracts in the making), thereby making the page an
attractive site for those engineers to visit through the Web.
Once an engineer from the company has visited the spy's Web
page, a channel is set up by which the Web page could send
back a Trojan horse (TH) program for execution on the
workstation being used to look at the page. The TH could be
passed as part of any executable program (Java and PostScript
provide two such vehicles) that otherwise did useful things
but on the side collected data resident on that workstation
(and any other computers to which it might be connected). Once
the data was obtained, it could be sent back to the spy's Web
page during the same session, or e-mailed back, or sent during
the next session used to connect to that Web page.
Furthermore, because contacts with a Web page by design
provide the specific address from which the contact is coming,
the TH could be sent only to the aerospace company (and to no
one else), thus reducing the likelihood that anyone else would
stumble upon it. Furthermore, the Web page contact also
provides information about the workstation that is making the
contact, thus permitting a customized and specially debugged
TH to be sent to that workstation.

____________________________________________________________


               2.4 THE MARKET FOR CRYPTOGRAPHY


   Cryptography is a product as well as a technology. Products
offering cryptographic capabilities can be divided into two
general classes:

   +    *Security-specific or stand-alone* products that are
generally add-on items (often hardware, but sometimes
software) and often require that users perform an
operationally separate action to invoke the encryption
capabilities. Examples include an add-on hardware board that
encrypts messages or a program that accepts a plaintext file
as input and generates a ciphertext file as output.

   +    *Integrated* (often "general-purpose") products in
which cryptographic functions have been incorporated into some
software or hardware application package as part of its
overall functionality. An integrated product is designed to
provide a capability that is useful in its own right, as well
as encryption capabilities that a user may or may not use.
Examples include a modem with on-board encryption or a word
processor with an option for protecting (encrypting) files
with passwords.(22)

   In addition, an integrated product may provide sockets or
hooks to user-supplied modules or components that offer
additional cryptographic functionality. An example is a
software product that can call upon a user-supplied package
that performs certain types of file manipulation such as
encryption or file compression. Cryptographic sockets are
discussed in Chapter 7 as cryptographic applications
programming interfaces.
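
   A minimal Python sketch of such a hook follows; the
interface and names are hypothetical and do not correspond to
any actual product or standard.

   # Toy illustration of a cryptographic "socket": the
   # application defers encryption to a user-supplied module.
   class ToyCryptoModule:
       # A user-supplied module offering encrypt and decrypt
       # calls; the XOR cipher is a placeholder, not a real
       # algorithm.
       def encrypt(self, key, plaintext):
           return bytes(p ^ key[i % len(key)]
                        for i, p in enumerate(plaintext))

       def decrypt(self, key, ciphertext):
           return self.encrypt(key, ciphertext)

   def save_document(text, key, crypto_module=None):
       # The application is useful on its own; if a module is
       # supplied, the stored form of the document is encrypted.
       if crypto_module is None:
           return text
       return crypto_module.encrypt(key, text)

   module = ToyCryptoModule()
   stored = save_document(b"draft contract", b"k3y", module)
   assert module.decrypt(b"k3y", stored) == b"draft contract"

   The application itself need not contain any cryptography;
the module plugged into the socket determines what algorithm
and key strength are actually used.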

   A product with cryptographic capabilities can be designed
to provide data confidentiality, data integrity, and user
authentication in any combination; a given commercial
cryptographic product may implement functionality for any or
all of these capabilities. For example, a PC-Card may
integrate cryptographic functionality for secure
authentication and for encryption onto the same piece of
hardware, even though the user may choose to invoke these
functions independently. A groupware program for remote
collaboration may implement cryptography for confidentiality
(by encrypting messages sent between users) and cryptography
for data integrity and user authentication (by appending a
digital signature to all messages sent between users).
Further, this program may be implemented in such a way that
these features can operate independently (either, both, or
neither may be operative at the same time).

   Because cryptography is usable only when it is incorporated
into a product, whether integrated or security-specific,
issues of supply and demand affect the use of cryptography.
The remainder of this section addresses both demand and supply
perspectives on the cryptography market.

----------

   (22) From a system design perspective, it is reasonable to
assert that word processing and database applications do not
have an intrinsic requirement for encryption capabilities and
that such capabilities could be better provided by the
operating system on which these applications operate. But as
a practical matter, operating systems often do not provide
such capabilities, and so vendors have significant incentives
to provide encryption capabilities that are useful to
customers who want better security.

____________________________________________________________


      2.4.1 The Demand Side of the Cryptography Market

   Chapter 1 discussed vulnerabilities that put the
information assets of businesses and individuals at risk. But
despite the presence of such risks, many organizations do not
undertake adequate information security efforts, whether those
efforts involve cryptography or any other tool. This section
explores some of the reasons for this behavior.


Lack of Security Awareness (and/or Need)

   Most people who use electronic communications behave as
though they regard their electronic communications as
confidential. Even though they may know in some sense that
their communications are vulnerable to compromise, they fail
to take precautions to prevent breaches in communications
security. Even criminals aware that they may be the subjects
of wiretaps have been overheard by law enforcement officials
to say, "This call is probably being wiretapped, but ... ,"
after which they go on to discuss incriminating topics.(23)

   The impetus for thinking seriously about security is
usually an event that is widely publicized and significant in
impact.(24) An example of responding to publicized problems is
the recent demand for encryption of cellular telephone
communications. In the past several years, the public has been
made aware of a number of instances in which traffic carried
over cellular telephones was monitored by unauthorized parties
(Appendix J). In addition, cellular telephone companies have
suffered enormous financial losses as the result of "cloning,"
an illegal practice in which the unencrypted ID numbers of
cellular telephones are recorded off the air and placed into
cloned units, thereby allowing the owner of the cloned unit to
masquerade as the legitimate user.(25) Even though many users
today are aware of such practices and have altered their
behavior somewhat (e.g., by avoiding discussion of sensitive
information over cellular telephone lines), more secure
systems such as GSM (the European standard for mobile
telephones) have gained only a minimal foothold in the U.S.
market.

   A second area in which people have become more sensitive to
the need for information security is in international
commerce. Many international business users are concerned that
their international business communications are being
monitored, and indeed such concerns motivate a considerable
amount of today's demand for secure communications.

   It is true that the content of the vast majority of
telephone communications in the United States (e.g., making a
dinner date, taking an ordinary business call) and data
communications (e.g., transferring a file from one computer to
another, sending an e-mail message) is simply not valuable
enough to attract the interest of most eavesdroppers.
Moreover, most communications links for point-to-point
communications in the United States are hard wired (e.g.,
fiber-optic cable) rather than wireless (e.g., microwave);
hardwired links are much more secure than wireless links.(26) In
some instances, compromises of information security do not
directly damage the interests of the persons involved. For
example, an individual whose credit card number is improperly
used by another party (who may have stolen his wallet or
eavesdropped on a conversation) is protected by a legal cap on
the liability for which he is responsible.

---------

   (23) A case in point is that the officers charged in the
Rodney King beating used their electronic communications
system as though it were a private telephone line, even though
they had been warned that all traffic over that system was
recorded. In 1992, Rodney King was beaten by members of the
Los Angeles Police Department. A number of transcripts of
police radio conversations describing the incident were
introduced as evidence at the trial. Had they been fully
aware at the time that all conversations were being recorded
as a matter of department policy, the
police officers in question most likely would not have said
what they did. Personal communication, Sara Kiesler, Carnegie
Mellon University, 1993.

   (24) It is widely believed that only a few percent of
computer break-ins are detected. See for example, Jane Bird,
"Hunting Down the Hackers," *Management Today*, July, 1994, p.
64 (reports that 1% of attacks are detected); Bob Brewin,
"Info Warfare Goes on Attack," *Federal Computer Week*, Volume
9(31), October 23, 1995, p. 1 (reports 2% detection); and Gary
Anthes, "Hackers Try New Tacks," *ComputerWorld*, January 30,
1995, p. 12 (reports 5% detection).

   (25) See for example, Bryan Miller, "Web of Cellular Phone
Fraud Widens," *New York Times*, July 20, 1995, p. C-1; and
George James, "3 Men Accused of Stealing Cellular Phone ID
Numbers," *New York Times*, October 19, 1995, p. B-3.

____________________________________________________________


Other Barriers Influencing Demand for Cryptography

   Even when a user is aware that communications security is
threatened and wishes to take action to forestall the threat,
a number of practical considerations can affect the decision
to use cryptographic protection. These considerations include
the following:

   +    *Lack of critical mass*. A secure telephone is not of
much use if only one person has it. Ensuring that
communications are secure requires collective action -- some
critical mass of interoperable devices is necessary in order
to stimulate demand for secure communications. To date, such
a critical mass has not yet been achieved.

   +    *Uncertainties over government policy*. Policy often
has an impact on demand. A number of government policy
decisions on cryptography have introduced uncertainty, fear,
and doubt into the marketplace and have made it difficult for
potential users to plan for the future. Seeing the controversy
surrounding policy in this area, potential vendors are
reluctant to bring to market products that support security,
and potential users are reluctant to consider products for
security that may become obsolete in the future in an unstable
legal and regulatory environment.

   +    *Lack of a supporting infrastructure*. The mere
availability of devices is not necessarily sufficient. For
some applications such as secure interpersonal communications,
a national or international infrastructure for managing and
exchanging keys could be necessary. Without such an
infrastructure, encryption may remain a niche feature that is
usable only through ad hoc methods replicating some of the
functions that an infrastructure would provide and for which
demand would thus be limited. Section 2.5 describes some
infrastructure issues in greater detail.

   +    *High cost*. To date, hardware-based cryptographic
security has been relatively expensive, in part because of the
high cost of stand-alone products made in relatively small
numbers. A user that initially deploys a system without
security features and subsequently wants to add them can be
faced with a very high cost barrier, and consequently there is
a limited market for security add-on products.

   On the other hand, the marginal cost of implementing
cryptographic capabilities in software at the outset is
rapidly becoming a minor part of the overall cost, and so
cryptographic capabilities are likely to appear in all manner
and types of integrated software products where there might be
a need.

   +    *Reduced performance*. The implementation of
cryptographic functions often consumes computational resources
(e.g., time, memory). In some cases, excessive consumption of
resources makes encryption too slow or forces the user to
purchase additional memory. If encrypting the communications
link over which a conversation is carried delays that
conversation by more than a few tenths of a second, users may
well choose not to use the encryption capability.

   +    *A generally insecure environment*. A given network or
operating system may be so inherently insecure that the
addition of cryptographic capabilities would do little to
improve overall security. Moreover, retrofitting security
measures atop an inherently insecure system is generally
difficult.

   +    *Usability*. A product's usability is a critical
factor in its market acceptability. Products with encryption
capabilities that are available for use but are in fact unused
do not increase information security. Such products may be
purchased but not used for the encryption they provide because
such use is too inconvenient in practice, or they may not be
purchased at all because the capabilities they provide are not
aligned well with the needs of their users. In general, the
need to undertake even a modest amount of extra work or to
tolerate even a modest inconvenience for cryptographic
protection that is not directly related to the primary
function of the device is likely to discourage the use of such
protection.(27) When cryptographic features are well
integrated in a way that does not demand case-by-case user
intervention, i.e., when such capabilities can be invoked
transparently to the average user, demand may well increase.

   +    *Lack of independent certification or evaluation of
products*. Certification of a product's quality is often
sought by potential buyers who lack the technical expertise to
evaluate product quality or who are trying to support certain
required levels of security (e.g., as the result of bank
regulations). Many potential users are also unable to detect
failures in the operation of such products.(28) With one
exception discussed in Chapter 6, independent certification
for products with integrated encryption capabilities is not
available, leading to market uncertainty about such products.

   +    *Electronic commerce*. An environment in which secure
communications were an essential requirement would do much to
increase the demand for cryptographic security.(29) However,
the demand for secure communications is currently nascent.

   +    *Uncertainties arising from intellectual property
issues*. Many of the algorithms that are useful in
cryptography (especially public-key cryptography) are
protected by patents. Some vendors are confused by the fear,
uncertainty, and doubt caused by existing legal arguments
among patent holders. Moreover, even when a patent on a
particular algorithm is undisputed, many users may resist its
use because they do not wish to pay the royalties.(30)

   +    *Lack of interoperability and standards*. For
cryptographic devices to be useful, they must be
interoperable. In some instances, the implementation of
cryptography can affect the compatibility of systems that may
have interoperated even though they did not conform strictly
to interoperability standards. In other instances, the
specific cryptographic algorithm used is yet another function
that must be standardized in order for two products to
interoperate. Nevertheless, an algorithm is only one piece of
a cryptographic device, and so two devices that implement the
same cryptographic algorithm may still not interoperate.(31)
Only when two devices conform fully to a single
interoperability standard (e.g., a standard that would specify
how keys are to be exchanged, the formatting of the various
data streams, the algorithms to be used for encryption and
decryption, and so on) can they be expected to interoperate
seamlessly.

   An approach gaining favor among product developers is
protocol negotiation,(32) which calls for two devices or
products to mutually negotiate the protocol that they will use
to exchange information. For example, the calling device may
query the receiving device to determine the right protocol to
use. Such an approach frees a device from having to conform to
a single standard and also facilitates the upgrading of
standards in a backward-compatible manner; a minimal sketch of
such a negotiation appears at the end of this list.

   +    *The heterogeneity of the communications
infrastructure*. Communications are ubiquitous, but they are
implemented through a patchwork of systems and technologies
and communications protocols rather than according to a single
integrated design. In some instances, they do not conform
completely to the standards that would enable full
interoperability. In other instances, interoperability is
achieved by intermediate conversion from one data format to
another. The result can be that transmission of encrypted data
across interfaces interferes with achieving connectivity among
disparate systems. Under these circumstances, users may be
faced with a choice of using unencrypted communications or not
being able to communicate with a particular other party at
all.(33)
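
   The protocol negotiation approach mentioned above can be
illustrated with a minimal Python sketch. The protocol names
are hypothetical; the point is only that the calling device
offers the protocols it supports in order of preference, and
the receiving device selects the first offer it also
understands, so that newer protocols can be introduced without
stranding older devices.

    def negotiate(offered, supported):
        # Return the first mutually supported protocol, or None
        # if the two devices have nothing in common.
        for protocol in offered:
            if protocol in supported:
                return protocol
        return None

    caller_offers  = ["STRONG-1999", "STRONG-1996", "LEGACY-1990"]
    receiver_knows = {"STRONG-1996", "LEGACY-1990"}
    print(negotiate(caller_offers, receiver_knows))  # STRONG-1996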

----------

   (26) A major U.S. manufacturer reported to the committee
that in the late 1980s, it was alerted by the U.S. government
that its microwave communications were vulnerable. In
response, this manufacturer took steps to increase the
capacity of its terrestrial communication links, thereby
reducing its dependence on microwave communications. A similar
situation was faced by IBM in the 1970s. See William Broad,
"Evading the Soviet Ear at Glen Cove," *Science*, Volume
217(3), 1982, pp. 910-911.

   (27) For example, experience with current secure telephones
such as the STU-III suggests that users of such phones may be
tempted, because of the need to contact many people, to use
them in a nonsecure mode more often than not.

   (28) Even users who do buy security products may still be
unsatisfied with them. For example, in two consecutive surveys
in 1993 and 1994, a group of users reported spending more and
being less satisfied with the security products they were
buying. See Dave Powell, "Annual Infosecurity Industry
Survey," *Infosecurity News*, March/April, 1995, pp. 20-27.

   (29) AT&T plans to take a non-technological approach to
solving some of the security problems associated with retail
Internet commerce. AT&T has announced that it will insure its
credit-card customers against unauthorized charges, as long as
those customers were using AT&T's service to connect to the
Internet. This action was taken on the theory that the real
issue for consumers is the fear of unauthorized charges,
rather than fears that confidential data per se would be
compromised. See Thomas Weber, "AT&T Will Insure Its Card
Customers on Its Web Service," *Wall Street Journal*, February
7, 1996, p. B-5.

   (30) See for example, James Bennett, "The Key to Universal
Encryption," *Strategic Investment*, December 20, 1995, pp.
12-13.

   (31) Consider the Data Encryption Standard (DES) as an
example. DES is a symmetric encryption algorithm, first
published in 1975 by the U.S. government, that specifies a
unique and well-defined transformation when given a specific
56-bit key and a block of text, but the various details of
operation within which DES is implemented can lead to
incompatibilities with other systems that include DES, with
stand-alone devices incorporating DES, and even with
software-implemented DES.

   Specifically, how the information is prepared prior to
being encrypted (e.g., how it is blocked into chunks) and
after the encryption (how the encrypted data is modulated on
the communications line) will affect the interoperability of
communications devices that may both use DES. In addition, key
management may not be identical for DES-based devices
developed independently. DES-based systems for file encryption
generally require a user-generated password to generate the
appropriate 56-bit DES key, but since the DES standard does
not specify how this aspect of key management is to be
performed, the same password used on two independently
developed DES-based systems may not result in the same 56-bit
key. For these and similar reasons, independently developed
DES-based systems cannot necessarily be expected to
interoperate.
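
   The key management point made in this note can be put in
concrete terms with a short Python sketch. Both derivations
below are hypothetical (neither is drawn from a real product,
and neither is recommended practice); they show only that two
vendors who map the same password to a 56-bit key in different
ways will produce different keys and therefore incompatible
ciphertexts.

    import hashlib

    def key_vendor_a(password: str) -> int:
        # Vendor A: pack the first 7 bytes of the password
        # into 56 bits.
        padded = password.encode("ascii")[:7].ljust(7, b"\0")
        return int.from_bytes(padded, "big")

    def key_vendor_b(password: str) -> int:
        # Vendor B: hash the password and keep 56 bits of the
        # digest.
        digest = hashlib.md5(password.encode("ascii")).digest()
        return int.from_bytes(digest[:7], "big")

    password = "s3cret"
    print(hex(key_vendor_a(password)))  # two different 56-bit keys,
    print(hex(key_vendor_b(password)))  # so ciphertexts will differ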

   (32) Transmitting a digital bit stream requires that the
hardware carrying that stream be able to interpret it.
Interpretation means that regardless of the content of the
communications (e.g., voice, pictures), the hardware must know
what part of the bit stream represents information useful to
the ultimate receiver and what part represents information
useful to the carrier. A communications protocol is an
agreed-upon convention about how to interpret any given bit
stream and includes the specification of any encryption
algorithm that may be used as part of that protocol.

   (33) An analogous example is the fact that two Internet
users may find it very difficult to use e-mail to transport a
binary file between them, because the e-mail systems on either
end may well implement standards for handling binary files
differently, even though they may conform to all relevant
standards for carrying ASCII text.

____________________________________________________________


      2.4.2 The Supply Side of the Cryptography Market

   The supply of products with encryption capabilities is
inherently related to the demand for them. Cryptographic
products result from decisions made by potential vendors and
users as well as standards determined by industry and/or
government. Use depends on availability as well as other
important factors such as user motivation, relevant learning
curves, and other nontechnical issues. As a general rule, the
availability of products to users depends on decisions made by
vendors to build or not to build them, and all of the
considerations faced by vendors of all types of products are
relevant to products with encryption capabilities.

   In addition to user demand, vendors need to consider the
following issues before deciding to develop and market a
product with encryption capabilities:

   +    *Accessibility of the basic knowledge underlying
cryptography*. Given that various books, technical articles,
and government standards on the subject of cryptography have
been published widely over the past 20 years, the basic
knowledge needed to design and implement cryptographic systems
that can frustrate the best attempts of anyone (including
government intelligence agencies) to penetrate them is
available to government and nongovernment agencies and parties
both here and abroad. For example, because a complete
description of DES is available worldwide, it is relatively
easy for anyone to develop and implement an encryption system
that involves multiple uses of DES to achieve much stronger
security than that provided by DES alone (a sketch of one such
construction appears at the end of this list).

   +    *The skill to implement basic knowledge of
cryptography*. A product with encryption capabilities involves
much more than a cryptographic algorithm. An algorithm must be
implemented in a system, and many design decisions affect the
quality of a product even if its algorithm is mathematically
sound. Indeed, efforts by multiple parties to develop products
with encryption capabilities based on the same algorithm could
result in a variety of manufactured products with varying
levels of quality and resistance to attack.

   For example, although cryptographic protocols are not part
and parcel of a cryptographic algorithm per se, these
protocols specify how critical aspects of a product will
operate. Thus, weaknesses in cryptographic protocols -- such
as a key generation protocol specifying how to generate and
exchange a specific encryption key for a given message to be
passed between two parties, or a key distribution protocol
specifying how keys are to be distributed to users of a given
product -- can compromise the confidentiality that a real
product actually provides, even though the cryptographic
algorithm and its implementation are flawless.(34)

   +    *The skill to integrate the cryptography into a usable
product*. Even a product that implements a strong
cryptographic algorithm in a competent manner is not valuable
if the product is unusable in other ways. For integrated
products with encryption capabilities, the noncryptographic
functions of the product are central, because the primary
purpose of an integrated product is to provide some useful
capability to the user (e.g., word processing, database
management, communications) that does not involve cryptography
per se; if cryptography interferes with this primary
functionality, it detracts from the product's value.

   In this area, U.S. software vendors and system integrators
have distinct strengths,(35) even though engineering talent
and cryptographic expertise are not limited to the United
States. For example, foreign vendors do not market integrated
products with encryption capabilities that are sold as
mass-market software, whereas many such U.S. products are
available.(36)

   +    *The cost of developing, maintaining, and upgrading an
economically viable product with encryption capabilities*. The
technical aspects of good encryption are increasingly well
understood. As a result, the incremental cost of designing a
software product so that it can provide cryptographic
functionality to end users is relatively small. As cost
barriers to the inclusion of cryptographic functionality are
reduced dramatically, the long-term likelihood increases that
most products that process digital information will include
some kinds of cryptographic functionality.

   +    *The suitability of hardware vs. software* as a medium
in which to implement a product with encryption capabilities.
The duplication and distribution costs for software are very
low compared to those for hardware, and yet, trade secrets
embedded in proprietary hardware are easier to keep than those
included in software. Moreover, software cryptographic
functions are more easily disabled.

   +    *Nonmarket considerations and export controls*.
Vendors may withhold or alter their products at government
request. A well-documented instance is AT&T's voluntary
deferral of the introduction of its 3600 Secure Telephone Unit
(STU) at the behest of the government (see Appendix E on the
history of current cryptography policy and Chapter 6 on
government influence). Export controls also
affect decisions to make products available even for domestic
use, as described in Chapter 4.
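
   The observation above about achieving greater strength
through multiple uses of DES can be sketched as follows. The
des_encrypt() and des_decrypt() functions are stand-ins for a
real DES implementation (publicly documented but not reproduced
here); the composition shown follows the spirit of the
well-known encrypt-decrypt-encrypt ("triple DES") construction
with independent keys.

    def des_encrypt(block: bytes, key: bytes) -> bytes:
        raise NotImplementedError  # placeholder for single DES

    def des_decrypt(block: bytes, key: bytes) -> bytes:
        raise NotImplementedError  # placeholder for its inverse

    def triple_des_encrypt(block: bytes, k1: bytes, k2: bytes,
                           k3: bytes) -> bytes:
        # E_k3(D_k2(E_k1(block))).  With k1 == k2 == k3 this
        # reduces to ordinary single DES, which eases backward
        # compatibility.
        return des_encrypt(des_decrypt(des_encrypt(block, k1),
                                       k2), k3)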

----------

   (34) An incident that demonstrates the importance of the
nonalgorithm aspects of a product is the failure of the
key-generation process for the Netscape Navigator Web browser
that was discovered in 1995; a faulty random number generation
used in the generation of keys would enable an intruder
exploiting this flaw to limit a brute-force search to a much
smaller number of keys than would generally be required by the
40-bit key length used in this product. See John Markoff,
"Security Flaw Is Discovered in Software Used in Shopping,"
*New York Times*, September 19, 1995, p. A1. A detailed
discussion of protocol failures can be found in Gustavus
Simmons, "Cryptanalysis and Protocol Failures,"
*Communications of the ACM*, Volume 37(11), 1994, pp. 56-65.
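
   The nature of the flaw described in this note can be
illustrated with a purely hypothetical Python sketch. If a
session key is derived deterministically from a low-entropy
seed (here, the time of day to the nearest second), an attacker
need only search the possible seeds -- at most 86,400 in a day
-- rather than the full space of 2^40 keys, no matter how sound
the cipher itself may be.

    import hashlib

    def session_key_from_seed(seed: int) -> bytes:
        # Hypothetical derivation of a 40-bit (5-byte) session
        # key from a predictable seed.
        return hashlib.md5(str(seed).encode()).digest()[:5]

    seeds_per_day = 24 * 60 * 60
    print(f"seeds an attacker must try: {seeds_per_day:,}")
    print(f"keys in a 40-bit key space: {2**40:,}")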

   (35) Computer Science and Telecommunications Board (CSTB),
National Research Council, *Keeping the U.S. Computer Industry
Competitive: Systems Integration*, National Academy Press,
Washington, D.C., 1992.

   (36) For example, the Department of Commerce and the
National Security Agency found no general-purpose software
products with encryption capability from non-U.S.
manufacturers. See Department of Commerce and National
Security Agency, *A Study of the International Market for
Computer Software with Encryption*, January 11, 1996, p.
III-9.

____________________________________________________________


    2.5 INFRASTRUCTURE FOR WIDESPREAD USE OF CRYPTOGRAPHY


   The widespread use of cryptography requires a support
infrastructure that can service organizational or individual
user needs with regard to cryptographic keys.


             2.5.1 Key Management Infrastructure

   In general, to enable use of cryptography across an
enterprise, there must be a mechanism that:

   +    Periodically supplies all participating locations with
keys (typically designated for use during a given calendar or
time period -- the crypto-period) for either stored materials
or communications; or

   +    Permits any given location to generate keys for itself
as needed (e.g., to protect stored files); or

   +    Can securely generate and transmit keys among
communicating parties (e.g., for data transmissions, telephone
conversations).

   In the most general case, any given location will have to
perform all three functions. With symmetric systems, the
movement of keys from place to place obviously must be done
securely and with a level of protection adequate to counter
the threats of concern to the using parties. Whatever the
distribution system, it clearly must protect the keys with
appropriate safeguards and must be prepared to identify and
authenticate the source. The overall task of securely assuring
availability of keys for symmetric applications is often
called key management.

   If all secure communications take place within the same
corporation or among locations under a common line of
authority, key management is an internal or possibly a joint
obligation. For parties that communicate occasionally or
across organizational boundaries, mutual arrangements must be
formulated for managing keys. One possibility might be a
separate trusted entity whose line of business could be to
supply keys of specified length and format, on demand and for
a fee.

   With asymmetric systems, the private keys are usually
self-generated, but they may also be generated from a central
source, such as a corporate security office. In all cases,
however, the handling of private keys is the same for
symmetric and asymmetric systems: they must be guarded with
the highest levels of security. Although public keys need not
be kept secret, their integrity and association with a given
user are extremely important and should also be supported with
extremely robust measures.

   The costs of a key management infrastructure for national
use are not known at this time. One benchmark figure is that
the cost of the Defense Department infrastructure needed to
generate and distribute keys for approximately 320,000 STU-III
telephone users is somewhere in the range of $10 million to
$13 million per year.(37)

----------

   (37) William Crowell, deputy director, National Security
Agency, personal communication, April 1995.

____________________________________________________________


              2.5.2 Certificate Infrastructures

   The association between key information (such as the name
of a person and the related public key) and an individual or
organization is an extremely important aspect of a
cryptographic system. That is, it is undesirable for one
person to be able to impersonate another. To guard against
impersonation, two general types of solutions have emerged: an
organization-centric approach consisting of certificate
authorities and a user-centric approach consisting of a web of
trust.

   A certificate authority serves to validate information that
is associated with a known individual or organization.
Certificate authorities can exist within a single
organization, across multiple related organizations, or across
society in general. Any number of certificate authorities can
coexist, and they may or may not have agreements for
cross-certification, whereby if one authority certifies a given
person, then another authority will accept that certification
within its own structure. Certificate authority hierarchies
are defined in the Internet RFCs 1421-1424, the X.509
standard, and other emerging commercial standards, such as
that proposed by MasterCard/Visa. A number of private
certificate authorities, such as VeriSign, have also begun
operation to service secure mass-market software products, such
as the Netscape Navigator Web browser.
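
   The structure of a certificate can be sketched briefly in
Python. The sign() and verify() functions are placeholders for
a real digital signature algorithm, and the certificate format
shown is hypothetical; the point is only that the authority
signs the binding between a name and a public key, and that
anyone holding the authority's own public key can check that
binding.

    def sign(message: bytes, private_key) -> bytes:
        raise NotImplementedError   # stand-in for a signature

    def verify(message: bytes, signature: bytes,
               public_key) -> bool:
        raise NotImplementedError   # stand-in for verification

    def issue_certificate(name, user_public_key, ca_private_key):
        binding = f"{name}|{user_public_key}".encode()
        return {"name": name,
                "public_key": user_public_key,
                "signature": sign(binding, ca_private_key)}

    def check_certificate(certificate, ca_public_key) -> bool:
        binding = (f"{certificate['name']}|"
                   f"{certificate['public_key']}").encode()
        return verify(binding, certificate["signature"],
                      ca_public_key)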

   Among personal acquaintances, validation of public keys can
be passed along from person to person or organization to
organization, thus creating a web of trust in which the entire
ensemble is considered to be trusted based on many individual
instances of trust. Such a chain of trust can be established
between immediate parties, or from one party to a second to
establish the credentials of a third. This approach has been
made popular by the Pretty Good Privacy (PGP) software
product; all users maintain their own "key-ring," which holds
the public keys of everyone with whom they want to
communicate.
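
   The web-of-trust approach can likewise be sketched with
hypothetical data (no real signature checking is shown). Here
one user trusts Bob's key directly; Bob has certified Carol's
key; Carol's key is therefore accepted on the strength of Bob's
introduction.

    keyring = {
        "bob":   {"trusted_directly": True,  "certified_by": []},
        "carol": {"trusted_directly": False,
                  "certified_by": ["bob"]},
    }

    def key_is_acceptable(name: str, ring: dict) -> bool:
        entry = ring[name]
        if entry["trusted_directly"]:
            return True
        # Accept a key if any of its introducers is directly
        # trusted.
        return any(ring[i]["trusted_directly"]
                   for i in entry["certified_by"])

    print(key_is_acceptable("carol", keyring))   # True, via Bob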

   It is important to note that both the certificate
authority approach and the web of trust approach replicate the
pattern of trust that already exists among participating
parties in societal and business activities. In a sense, the
certificate infrastructure for cryptography simply formalizes
and makes explicit what society and its institutions are
already accustomed to.

   At some point, banks, corporations, and other organizations
already generally trusted by society will start to issue
certificates. At that time, individuals especially may begin
to feel more comfortable about the cryptographic undergirding
of society's electronic infrastructure, at which point the
webs of trust can be expected to evolve according to
individual choices and market forces. However, it should be
noted that different certificates will be used for different
functions, and it is unlikely that a single universal
certificate infrastructure will satisfy all societal and
business needs. For example, because an infrastructure
designed to support electronic commerce and banking may do no
more than identify valid purchasers, it may not be useful for
providing interpersonal communication or corporate access
control.

   Certificate authorities already exist within some
businesses, especially those that have moved vigorously into
an electronic way of life. Generally, there is no sense of a
need for a legal framework to establish relationships among
organizations, each of which operates its own certificate
function. Arrangements exist for them to cross-certify one
another; in general, the individual(s) authorizing the
arrangement will be a senior officer of the corporation, and
the decision will be based on the existence of other legal
agreements already in place, notably, contracts that define
the relationships and obligations among organizations.

   For the general business world in which any individual or
organization wishes to conduct a transaction with any other
individual or organization, such as the sale of a house, a
formal certificate infrastructure has yet to be created. There
is not even one to support just a digital signature
application within government. Hence, it remains to be seen
how, in the general case, individuals and organizations will
make the transition to an electronic society.

   Certificate authorities currently operate within the
framework of contractual law. That is, if some problem arises
as the result of improper actions on the part of the
certification authority, its subscribers would have to pursue
a civil complaint. As certificate authorities grow in size and
service a greater part of society, it will probably be
necessary to regulate their actions under law, much like those
of other major societal institutions.(38) It is interesting to
observe that the legal and operational environment that will
have to exist for certificate organizations involves the same
set of issues that are pertinent to escrow organizations (as
discussed in Chapter 5).

----------

   (38) Shimshon Berkovits et al., *Public Key Infrastructure
Study: Final Report*, National Institute of Standards and
Technology, Gaithersburg, Maryland, April 1994. Performed
under contract to MITRE, this study is summarized in Appendix
H.

____________________________________________________________


                          2.6 RECAP


   Cryptography provides important capabilities that can help
deal with the vulnerabilities of electronic information.
Cryptography can help to assure the integrity of data, to
authenticate the identity of specific parties, to prevent
individuals from plausibly denying that they have signed
something, and to preserve the confidentiality of information
that may have improperly come into the possession of
unauthorized parties. At the same time, cryptography is not a
silver bullet, and many technical and human factors other than
cryptography can improve or detract from information security.
In order to preserve information security, attention must be
given to all of these factors. Moreover, people can use
cryptography only to the extent that it is incorporated into
real products and systems; unimplemented cryptographic
algorithms cannot contribute to information security. Many
factors other than raw mathematical knowledge contribute to
the supply of and demand for products with cryptographic
functionality. Most importantly, the following aspects
influence the demand for cryptographic functions in products:

   +    Critical mass in the marketplace,

   +    Government policy,

   +    Supporting infrastructure,

   +    Cost,

   +    Performance,

   +    Overall security environment,

   +    Usability,

   +    Quality certification and evaluation, and

   +    Interoperability standards.

Finally, any large-scale use of cryptography, with or without
key escrow (discussed later in Chapter 5), depends on the
existence of a substantial supporting infrastructure, the
deployment of which raises a different set of problems and
issues.

____________________________________________________________

  BOX 2.1 The Evolution of the Telecommunications Industry

   Prior to 1984, the U.S. telecommunications industry was
dominated by one primary player -- AT&T. An elaborate
regulatory structure had evolved in the preceding decades to
govern what had become an essential national service on which
private citizens, government, and business had come to rely.

   By contrast, the watchword in telecommunications a mere
decade later has become competition. AT&T is still a major
player in the field, but the regional Bell operating companies
(RBOCs), separated from AT&T as part of the divestiture
decision of 1984, operate entirely independently, providing
local services. Indeed, the current mood in Congress toward
deregulation is already causing increasingly active
competition and confrontation among all of the players
involved, including cable TV companies, cellular and mobile
telephone companies, the long-distance telecommunications
companies (AT&T, MCI, Sprint, and hundreds of others), the
RBOCs and other local exchange providers, TV and radio
broadcast companies, entertainment companies, and satellite
communications companies. Today, all of these players compete
for a share of the telecommunications pie in the same
geographic area; even railroads and gas companies (which own
geographic rights of way along which transmission lines can be
laid) and power companies (which have wires going to every
house) have dreams of profiting from the telecommunications
boom. The playing field is even further complicated by the
practice of reselling -- institutions often buy telecommunications
services from "primary" providers in bulk to serve their own
needs and resell the excess to other customers.

   In short, today's telecommunications industry is highly
heterogeneous and widely deployed with multiple public and
private service providers, and will become more so in the
future.

____________________________________________________________

       BOX 2.2 Fundamentals of Cryptographic Strength

   Cryptographic strength depends on two factors: the size of
the key, and the mathematical structure of the algorithm
itself. For well-designed symmetric cryptographic systems,
"brute-force" exhaustive search -- trying all possible keys
with a given decryption algorithm until the (meaningful)
plaintext appears -- is the best publicly known cryptanalytic
method. For such systems the work factor (i.e., the time to
cryptanalyze) grows exponentially with key size. Hence, with
a sufficiently long key, even an eavesdropper with very
extensive computing resources would have to take a very long
time (longer than the age of the universe) to test all
possible combinations. Adding one binary digit (bit) to the
length of a key doubles the length of time it takes to
undertake a brute-force attack while adding only a very small
increment to the time it takes to encrypt the plaintext.

   How long is a "long" key? To decipher by brute force a
message encrypted with a 40-bit key requires 2^40
(approximately 10^12) tests. If each test takes 10^-6 seconds
to conduct, 1 million seconds of testing time on a single
computer are required to conduct a brute-force attack, or
about 11.5 days. A 56-bit key increases this time by a factor
of 2^16, or 65,536; under the same assumptions, a brute-force
attack on a message encrypted with a 56-bit key would take
over 2,000 years.

   Two important considerations mitigate the bleakness of this
conclusion from the perspective of the interceptor. One is
that computers can be expected to grow more powerful over
time. Speed increases in the underlying silicon technology
have exhibited a predictable pattern for the past 50 years --
computational speed doubles every 18 months (Moore's law),
equivalent to increasing by a factor of 10 every 5 years.
Thus, if a single test takes 10^-6 seconds today, in 15 years,
it can be expected to take 10^-9 seconds. Additional speedup
is possible using parallel processing. Some supercomputers use
tens of thousands of microprocessors in parallel, and
cryptanalytic problems are particularly well-suited to
parallel processing. Even 1,000 processors working in
parallel, each using the underlying silicon technology of 15
years hence, would be able to decrypt a single 56-bit
encrypted message in 18 hours.
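
   The arithmetic behind the figures in this box can be checked
with a few lines of Python. The 10^-6 and 10^-9 seconds per
trial are the box's assumptions, not measurements, and the
box's figures of 11.5 days and 18 hours follow from rounding
2^40 down to 10^12 tests; exact powers of two give slightly
larger numbers of the same order.

    test_time_today  = 1e-6    # seconds per trial key (assumed)
    test_time_future = 1e-9    # 15 years hence, per Moore's law
    seconds_per_day  = 86_400

    days_40  = (2 ** 40) * test_time_today / seconds_per_day
    years_56 = ((2 ** 56) * test_time_today
                / (seconds_per_day * 365))
    hours_56_parallel = ((2 ** 56) * test_time_future
                         / 1_000 / 3_600)

    print(f"40-bit key, one computer today: {days_40:.1f} days")
    print(f"56-bit key, one computer today: {years_56:,.0f} years")
    print(f"56-bit key, 1,000 future CPUs:  "
          f"{hours_56_parallel:.1f} hours")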

   As for the exploitation of alternatives to brute-force
search, all known asymmetric (i.e., public-key) cryptographic
systems allow shortcuts to exhaustive search. Because more
information is public in such systems, it is also likely that
shortcut attacks will exist for any new systems invented.
Shortcut attacks also exist for poorly designed symmetric
systems. Newly developed shortcut attacks constitute
unforeseen breakthroughs, and so by their very nature
introduce an unpredictable "wild card" into the effort to set
a reasonable key size. Because such attacks are applicable
primarily to public-key systems, larger key sizes and larger
safety margins are needed for such systems than for symmetric
cryptographic systems. For example, factoring a 512-bit number
by exhaustive search would take 2^256 tests (since at least
one factor must be less than 2^256); known shortcut attacks
would allow such numbers to be factored in approximately 2^65
operations, a number on the order of that required to
undertake a brute-force exhaustive search of a message
encrypted with a 64-bit symmetric cryptographic system. While
symmetric 64-bit systems are considered relatively safe, fear
of future breakthroughs in cryptanalyzing public-key systems
has led many cryptographers to suggest a minimum key size of
1,024 bits for public-key systems, thereby providing in key
length a factor-of-two safety margin over the safety afforded
by 512-bit keys.

   More discussion of this topic can be found in Appendix C.

[End Chapter 2]

____________________________________________________________








[Head note all pages: May 30, 1996, Prepublication Copy
Subject to Further Editorial Correction]


                              3

          Needs for Access to Encrypted Information


   Information protected for confidentiality (i.e., encrypted
information) is stored or communicated for later use by
certain parties with the authorization of the original
protector. However, it may happen for various legitimate and
lawfully authorized reasons that other parties may need to
recover this information as well. This chapter discusses needs
for access to encrypted information under exceptional
circumstances for legitimate and lawfully authorized purposes
from the perspectives of businesses, individuals, law
enforcement, and national security. Businesses and individuals
may want access to encrypted data or communications for their
own purposes, and thus may cooperate in using products to
facilitate such access, while law enforcement and national
security authorities may want access to the encrypted data or
communications of criminals and parties hostile to the United
States.


                       3.1 TERMINOLOGY

   It is useful to conceptualize data communications and data
storage using the language of transactions. For example, one
individual may telephone another; the participants in the
transaction are usually referred to as the calling party and
the called party. Or, a person makes a purchase; the
participants are called the buyer and seller. Or, a sender
mails something to the recipient. Adopting this construct,
consider communications in which the first party (Party A)
sends a message and the second party (Party B) receives it.
"Party" does not necessarily imply a person; a "party" can be
a computer system, a communication system, or a software process.
In the case of data storage, Party A stores the data, while
Party B retrieves it. Note that Party A and Party B can be the
same party (as is the case when an individual stores a file
for his or her own later use).

   Under some circumstances, a third party may be authorized
for access to data stored or being communicated. For example,
law enforcement authorities may be granted legal authorization
to obtain surreptitious access to a telephone conversation or
a stored data file or record without the knowledge of Parties
A or B. The employer of Party A may have the legal right to
read all data files for which Party A is responsible or to
monitor all communications in which Party A participates.
Party A might inadvertently lose access to a data file and
wish to recover that access.

   In cases when the data involved is unencrypted, the
procedures needed to obtain access can be as simple as
identifying the relevant file name or as complex as seeking a
court order for legal authorization. But when the data
involved is encrypted, the procedures needed to obtain access
will require the possession of certain critical pieces of
information, such as the relevant cryptographic keys.

   Third-party access has many twists and turns. When it is
necessary for clarity of exposition or meaning, this report
uses the phrase "exceptional access" to stress that the
situation is not one that was included within the intended
bounds of the original transaction, but is an unusual
subsequent event. Exceptional access refers to situations in
which an authorized party needs and can obtain the plaintext
of encrypted data (for storage or communications). The word
"exceptional" is used in contrast to the word "routine" and
connotes something unusual about the circumstances under which
access is required.

   Exceptional access can be divided into three generic
categories:

   +    *Government exceptional access* refers to the case in
which government has a need for access to information under
specific circumstances authorized by law. For example, a
person might store data files that law enforcement authorities
need to prosecute or investigate a crime. Alternatively, two
people may be communicating with each other in the planning or
commission of a serious crime. Government exceptional access
thus refers to the government's need to obtain the relevant
information under circumstances authorized by law, and
requires a court order (for access to voice or data
communications) or a subpoena or search warrant (for access to
stored records). Government exceptional access is the focus of
Section 3.2.

   +    *Employer (or corporate) exceptional access* refers to
the case in which an employer (i.e., the corporate employer)
has the legal right to access to information encrypted by an
employee. If an employee who has encrypted a file is
indisposed on a certain day, for example, the company may need
exceptional access to the contents of the file. Alternatively,
an employee may engage in communications whose content the
company may have a legitimate need to know (e.g., the employee
may be leaking proprietary information). Employer exceptional
access would then refer to the company's requirement to obtain
the key necessary to obtain the contents of the file or
communications, and may require the intervention of another
institutional entity. Employer or corporate exceptional access
is the focus of Section 3.5.

   +    *End-user exceptional access* refers to the case in
which the parties primarily intended to have access to
plaintext have lost the means to obtain such access. For
example, a single user may have stored a file for later
retrieval, but encrypted it to ensure that no other party
would have access to it while it was in storage. However, the
user might also lose or forget the key used to encrypt that
file. End-user exceptional access refers to such a user's
requirement to obtain the proper key, and may require that the
individual who has lost a key prove his identity to a party
holding the backup key and verify his authorization to obtain
a duplicate copy of his key. End-user exceptional access is
also discussed in Section 3.5.

   The need for exceptional access when the information stored
or communicated is encrypted has led to an examination of a
concept generically known as escrowed encryption (the subject
of Chapter 5), which, loosely speaking, uses agents other than
the parties participating in the communication or data storage
to hold copies of or otherwise have access to relevant
cryptographic keys "in escrow" so that needs for end-user,
corporate, and government exceptional access can be met; these
agents are called escrow agents.


     3.2 LAW ENFORCEMENT: INVESTIGATION AND PROSECUTION


   Obtaining information (both evidence and intelligence) has
always been a central element in the conduct of law
enforcement investigations and prosecutions. Accordingly,
criminals have always wished to protect the information
relevant to their activities from law enforcement authorities.


          3.2.1 The Value of Access to Information
                     for Law Enforcement

   Many criminals keep records related to their activities;
such records can be critical to the investigation and
prosecution of criminal activity. For example, criminals
engaged in white-collar crimes such as fraud often leave paper
trails that detail fraudulent activities; drug dealers often
keep accounting records of clients, drop-offs, supplies, and
income. Reconstruction of these paper trails is often a
critical element in building a case against these individuals.
The search-and-seizure authority of law enforcement to obtain
paper records is used in a large fraction of criminal cases.
Law enforcement officials believe that wiretapping is a
crucial source of information that could not be obtained in
any other way or could be obtained only at high risk (Box
3.2). For
example, the FBI has testified that

   [w]ithout law enforcement's ability to effectively execute
   court orders for electronic surveillance, the country would
   be unable to protect itself against foreign threats,
   terrorism, espionage, violent crime, drug trafficking,
   kidnapping, and other crimes. We may be unable to intercept
   a terrorist before he sets off a devastating bomb; unable
   to thwart a foreign spy before he can steal secrets that
   endanger the entire country; and unable to arrest drug
   traffickers smuggling in huge amounts of drugs that will
   cause widespread violence and death. Court-approved
   electronic surveillance is of immense value, and often is
   the only way to prevent or solve the most serious crimes
   facing today's society.(1)

   Criminals often discuss their past criminal activity and
plans for future criminal activity with other parties.
Obtaining "inside information" on such activities is often a
central element of building a case against the perpetrators.
A defendant who describes in his own words how he committed
a crime or the extent to which he was involved in it gives
prosecutors a powerful weapon that juries tend to perceive as
fair.(2)

   Other methods of obtaining "inside information" have
significant risks associated with them:

   +    Informants are often used to provide inside
information. However, the credibility of informants is often
challenged in court, either because the informants have shady
records themselves or because they may have made a deal with
prosecutors by agreeing to serve as informants in return for
more lenient treatment.(3) By contrast, challenges to evidence
obtained through wiretaps are far more frequently based on
their admissibility in court rather than their intrinsic
credibility. Informants may also be more difficult to find
when a criminal group is small in size.

   +    Surreptitiously planted listening devices are also
used to obtain inside information. However, they generally
obtain only one side of a conversation (use of a speaker-phone
presents an exception). Further, since listening devices
require the use of an agent to plant them, installation of
such devices is both highly intrusive (arguably more so than
wiretapping) for the subject of the device and risky for the
planting agent. Requests for the use of such devices are
subject to the same judicial oversight and review as wiretaps.

   This discussion is not intended to suggest that wiretaps
are a perfect source of information and always useful to law
enforcement. An important difficulty in using wiretaps is that
context is often difficult for listeners to establish when
they are monitoring a telephone conversation that assumes
shared knowledge between the communicators.(4)

   Because of the legal framework regulating wiretaps, and the
fact that communications are by definition transient whereas
records endure, wiretapping is used in far fewer criminal
cases than is seizure of records. Although the potential
problem of denying law enforcement access to communications
has been the focus of most of the public debate, encryption of
data files in a way that denies law enforcement authorities
access to records relevant to criminal activity arguably
presents a much larger threat to their capabilities.

----------

   (1)  Statement of James K. Kallstrom, Special Agent in
Charge, Special Operations Division, New York Field Division,
Federal Bureau of Investigation on "Security Issues in
Computers and Communications," before the Subcommittee on
Technology, Environment, and Aviation of the Committee on
Science, Space, and Technology, U.S. House of Representatives,
May 3, 1994.

   (2)  For example, see Edward Walsh, "Reynolds Guilty on All
Counts," *Washington Post*, August 23, 1995, p. 1.

   (3)  See for example, Sharon Walsh, "Whistle-Blower
Quandry: Will Testimony Fly?," *Washington Post*, August 23,
1995, p. F-3; Richard Perez-Pena, "An Informer's Double Life:
Blows Come from 2 Sides," *New York Times*, October 15, 1995,
p. 35; Joseph P. Fried, "Undermining a Bomb-Trial Witness,"
*New York Times*, April 9, 1995, p. 42; and Stephen Labaton,
"The Price Can Be High for Talk That's Cheap," *New York
Times*, Week in Review, April 2, 1995, p. 3.

   (4)  Indeed, in some instances, wiretap evidence has been
used to *exculpate* defendants. See for example, Peter Marks,
"When the Best Defense is the Prosecution's Own Tapes," *New
York Times*, June 30, 1995, p. D-20. According to Roger Shuy,
professor of linguistics at Georgetown University, there are
many difficulties in ascribing meaning to particular
utterances that may be captured on tape recordings of
conversations. See Roger Shuy, *Language Crimes*, Blackwell
Publishers, Cambridge, Mass., 1993. Shuy's book is mostly
focused on tapes made by "wires" carried by informants or
"bugs" placed near a subject, but the basic principle is the
same.

____________________________________________________________


      3.2.2 The Legal Framework Governing Surveillance

   An evolving legal framework governs the authority of
government authorities to undertake surveillance of
communications that take place within the United States or
that involve U.S. persons. Surveillance within the United
States is authorized only for certain legislatively specified
purposes: the enforcement of certain criminal statutes and the
collection of foreign intelligence. A more extended
description of this framework (with footnoted references) is
contained in Appendix D.


Domestic Communications Surveillance
for Domestic Law Enforcement Purposes

   Communications surveillance can involve surveillance for
traffic analysis and/or surveillance for content; these
separate activities are governed by different laws and
regulations. Traffic analysis, a technique that establishes
patterns of connections and communications, is performed with
the aid of pen registers that record the numbers dialed from
a target telephone, and trap-and-trace devices that identify
the numbers of telephones from which calls are placed to the
target telephone. Orders for the use of these devices may be
requested by any federal attorney and granted by any federal
district judge or magistrate, and are granted on a more or
less pro forma basis.

   Surveillance of communications for content for purposes of
domestic law enforcement is governed by Title 18, United
States Code, Sections 2510-2521 concerning "wire and
electronic communications interception and interception of
oral communications," generally known as Title III. These
sections of the U.S. code govern the use of listening devices
(usually known as "bugs"); wiretaps of communications
involving human speech (called "oral communications" in Title
III) carried over a wire or wire-like cable, including optical
fiber; and other forms of electronically transmitted
communication, including various forms of data, text, and
video that may be communicated between or among people as well
as computers or communications devices. Under Title III, only
certain federal crimes may be investigated (e.g., murder,
kidnapping, child molestation, racketeering, narcotics
offenses) through the interception of oral communications. In
addition, 37 states have passed laws that are similar to Title
III, but they include such additional restrictions as allowing
only a fixed number of interceptions per year (Connecticut) or
only for drug-related crimes (California). State wiretaps
account for the majority of wiretaps in the United States.

   Surveillance of oral communications governed under Title
III in general requires a court order (i.e., a warrant)
granted at the discretion of a judge.(5) Because electronic
surveillance of oral communications is both inherently
intrusive and clandestine, the standards for granting a
warrant for such surveillance are more stringent than those
required by the Fourth Amendment. These additional
requirements are specified in Title III and are enforced by
criminal and civil penalties applicable to law enforcement
officials or private citizens, and by a statutory exclusionary
rule under which violations of its central requirements may
lead to suppression of evidence in a later trial, even if
such evidence meets the relevant Fourth Amendment test.

   Because of the resources required, the administrative
requirements for the application procedure, and the legal
requirement that investigators exhaust other means of
obtaining information, wiretaps are not often used.
Approximately 1,000 orders (both federal and state) are
authorized yearly (a number small compared to the number of
felonies investigated, even if such felonies are limited to
those specified in Title III as eligible for investigation
with wiretaps).(6) About 2,500 conversations are intercepted
per order, and the total number of conversations intercepted
is a very small fraction of the annual telephone traffic in
the United States.

   Surveillance of nonvoice communications, including fax and
electronic communications, is also governed by Title III.(7)
The standard for obtaining an intercept order for electronic
communications is less stringent than that for intercepting
voice communications. For example, any federal felony may be
investigated through electronic interception. In addition, the
statutory exclusionary rule of Title III for oral and wire
communications does not apply to electronic communications.

   Despite the legal framework outlined above, it is
nevertheless possible that unauthorized or unlawful
surveillance, whether undertaken by rogue law enforcement
officials or overzealous private investigators, also occurs.
Concerns over such activity are often expressed by critics of
the current administration's policy, and they focus on two
scenarios:

   +    With current telephone technology, it is sometimes
technically possible for individuals (e.g., private
investigators, criminals, rogue law enforcement personnel) to
undertake wiretaps on their own initiative (e.g., by placing
alligator clips on the proper terminals in the telephone box
of an apartment building). Such wiretaps would subject the
personnel involved to Title III criminal penalties, but
detection of such wiretaps might well be difficult. On the
other hand, it is highly unlikely that such a person could
obtain the cooperation of major telephone service providers
without a valid warrant or court order, and so these wiretaps
would have to be conducted relatively close to the target's
telephone, and not in a telephone switching office.

   +    Information obtained through a wiretap in violation of
Title III can be suppressed in court, but such evidence may
still be useful in the course of an investigation.
Specifically, such evidence may cue investigators regarding
specific areas that would be particularly fruitful to
investigate, and if the illegal wiretap is never discovered,
a wiretap that provides no court-admissible evidence may still
prove pivotal to an investigation.(8) (Even if it is
discovered, different judges apply the doctrine of excluding
"the fruit of the poisonous tree" with differing degrees of
rigor.)

   The extent to which these and similar scenarios actually
occur is hard to determine. Information provided by the FBI to
the committee indicates a total of 187 incidents of various
types (including indictment/complaints and convictions/
pretrial diversions) involving charges of illegal electronic
surveillance (whether subsequently confirmed or not) over the
past 5 fiscal years (1990 through 1994).(9)

----------

   (5)  Emergency intercepts may be performed without a
warrant in certain circumstances, such as physical danger to
a person or conspiracy against the national security. There
has been "virtually no use" of the emergency provision, and
its constitutionality has not been tested in court. Wayne R.
LaFave and Jerold H. Israel, *Criminal Procedure*, West
Publishing Company, St. Paul, Minnesota, 1992, p. 254.

   (6)  Some analysts critical of the U.S. government position
on wiretaps have suggested that the actual distribution of
crimes investigated under Title III intercept or surveillance
orders may be somewhat inconsistent with government claims of the
high value of such orders. (See, for example, testimony of
David B. Kopel, Cato Institute, "Hearings on Wiretapping and
Other Terrorism Proposals," Committee on the Judiciary, U.S.
Senate, May 24, 1995, also available on line at
http://www.cato.org/ct5-24-5.html.) For example, Table D.3 in
Appendix D indicates that no cases involving arson,
explosives, or weapons were investigated using Title III
wiretaps in 1988. The majority of Title III orders have
involved drug and gambling crimes.

   (7)  Note that when there is no reasonable expectation of
privacy, law enforcement officials are not required to
undertake any special procedure to monitor such
communications. For example, a law enforcement official
participating in an on-line "chat" group is not required to
identify himself as such, nor must he obtain any special
permission at all to monitor the traffic in question. However, as
a matter of policy, the FBI does not systematically monitor
electronic forums such as Internet relay chats.

   (8)  Such concerns are raised by reports of police
misconduct as described in Chapter 1.

   (9)  The committee recognizes the existence of controversy
over the question of whether such reports should be taken at
face value. For example, critics of the U.S. government who
believe that law enforcement authorities are capable of
systematically abusing wiretap authority argue that law
enforcement authorities would not be expected to report
figures that reflected such abuse. Alternatively, it is also
possible that cases of improper wiretaps are in fact more
numerous than reported and have simply not come to the
attention of the relevant authorities. The committee discussed
such matters and concluded that it had no reason to believe
that the information it received on this subject from law
enforcement authorities was in any way misleading.

____________________________________________________________


Domestic Communications Surveillance
for Foreign Intelligence Purposes

   The statute governing interception of electronic
communications for purposes of protecting national security is
known as the Foreign Intelligence Surveillance Act (FISA),
which has been codified as Sections 1801 to 1811 in Title 50
of the U.S. Code. Passed in 1978, FISA was an attempt to
balance Fourth Amendment rights against the constitutional
responsibility of the executive branch to maintain national
security. FISA is relevant only to communications occurring at
least partly within the United States (wholly, in the case of
radio communications), although listening stations used by
investigating officers may be located elsewhere, and FISA
surveillance may be performed only against foreign powers or
their agents. Interception of communications, when the
communications occur entirely outside the United States,
whether or not the participants include U.S. persons, is not
governed by FISA, Title III, or any other statute. However,
when a U.S. person is outside the United States, Executive
Order 12333 governs any communications intercepts targeted
against such individuals.

   The basic framework of FISA is similar to that of Title
III, with certain important differences, among which are the
following:

   +    The purpose of FISA surveillance is to obtain foreign
intelligence information, defined in terms of U.S. national
security, including defense against attack, sabotage,
terrorism, and clandestine intelligence activities, among
others. The targeted communications need not relate to any
crime or be relevant as evidence in court proceedings.

   +    In most instances, a FISA surveillance application
requires a warrant based on probable cause that foreign
intelligence information will be collected.(10) Surveillance
of a U.S. person (defined as a U.S. citizen, U.S. corporation
or association, or legal resident alien) also requires
probable cause showing that the person is acting as a foreign
agent. Political and other activities protected by the First
Amendment may not serve as the basis for treating a U.S.
person as a foreign agent.

   +    Targets of FISA surveillance might never be notified
that their communications have been intercepted.

   Since 1979, FISA orders have been issued at an average
rate of more than 500 per year; in 1992, 484 were issued.
Other information about FISA intercepts is classified.

----------

   (10) Surveillance may take place without a court order for
up to 1 year if the Attorney General certifies that there is
very little likelihood of intercepting communications
involving U.S. persons and that the effort will target
facilities used exclusively by foreign powers. Under limited
circumstances, emergency surveillance may be performed before
a warrant is obtained. Clifford S. Fishman, *Wiretapping and
Eavesdropping: Cumulative Supplement*, Clark Boardman
Callaghan, Deerfield, Ill., November 1994, Sections 361 and 366.

____________________________________________________________


           3.2.3 The Nature of Surveillance Needs
                     of Law Enforcement

   In cooperation with the National Technical Investigators
Association, the FBI has articulated a set of requirements for
its electronic surveillance needs (Box 3.3). Of course, access
to surveillance that does not meet all of these requirements
is not necessarily useless. For example, surveillance that
does not meet the transparency requirement may still be quite
useful in certain cases (e.g., if the subjects rationalize the
lack of transparency as "static on the line"). The basic point
is that these requirements constitute a set of continuous
metrics by which the quality of a surveillance capability can
be assessed, rather than a list that defines what is or is not
useful surveillance. Of these requirements, the real-time
requirement is perhaps the most demanding. The FBI has noted
that

   [s]ome encryption products put at risk efforts by federal,
   state and local law enforcement agencies to obtain the
   contents of intercepted communications by precluding
   real-time decryption. Real-time decryption is often
   essential so that law enforcement can rapidly respond to
   criminal activity and, in many instances, prevent serious
   and life-threatening criminal acts.(11)

   Because of the time scales involved, real-time surveillance
is generally less important for crimes that are prosecuted or
investigated than for crimes that are prevented. Prosecutions
and investigations take place on time scales of days or more,
whereas prevention may take place on a time scale of hours. In
some instances, the longer time scale is
relevant: because Title III warrants can be issued only when
"probable cause" exists that a crime has been committed, the
actual criminal act is committed before the warrant is issued,
and thus prevention is no longer an issue. In other instances,
information obtained under a valid Title III warrant issued to
investigate a specific criminal act can be used to prevent a
subsequent criminal act, in which case the shorter time scale
may be relevant. The situation is similar under FISA, in which
warrants need not necessarily be obtained in connection with
any criminal activity. A good example is terrorism cases, in
which it is quite possible that real-time surveillance could
provide actionable information useful in thwarting an imminent
terrorist act.

----------

   (11) Statement of James K. Kallstrom, Special Agent in
Charge, Special Operations Division, New York Field Division,
Federal Bureau of Investigation on "Security Issues in
Computers and Communications," before the Subcommittee on
Technology, Environment, and Aviation of the Committee on
Science, Space, and Technology, U.S. House of Representatives,
May 3, 1994. An illustrative example is an instance in which
the FBI was wiretapping police officers who were allegedly
guarding a drug shipment. During that time, the FBI overheard
a conversation between the police chief and several other
police officials that the FBI believes indicated a plot to
murder a certain individual who had previously filed a police
brutality complaint against the chief. (However, the FBI was
unable to decode the police chief's "street slang and police
jargon" in time to prevent the murder.) See Paul Keegan, "The
Thinnest Blue Line," *New York Times Magazine*, March 31,
1996, pp. 32-35.

____________________________________________________________


       3.2.4 The Impact of Cryptography and New Media
      on Law Enforcement (Stored and Communicated Data)

   Cryptography can affect information collection by law
enforcement officials in a number of ways. However, for
perspective, it is important to keep in mind a broader context
-- namely that advanced information technologies (of which
cryptography is only one element) have potential impacts
across many different dimensions of law enforcement; Box 3.4
provides some discussion of this point.


Encrypted Communications

   As far as the committee has been able to determine,
criminal use of digitally encrypted voice communications has
not presented a significant problem to law enforcement to
date.(12) On rare occasions, law enforcement officials
conducting a wiretap have encountered "unknown signals" that
could be encrypted traffic or simply a data stream that was
unrecognizable to the intercept equipment. (For example, a
high-speed fax transmission might be transported on a
particular circuit; a monitoring agent might be unable to
distinguish between the signal of the fax and an encrypted
voice signal with the equipment available to him.)

   The lack of criminal use of encryption in voice
communications most likely reflects the lack of use of
encryption by the general public. Moreover, files are more
easily encrypted than communications, simply because the use
of encrypted communications presumes an equally sophisticated
partner, whereas only one individual must be knowledgeable to
encrypt files. As a general rule, criminals are most likely to
use what is available to the general public, and the
encryption available to and usable by the public has to date
been minimal. At the same time, sophisticated and wealthy
criminals (e.g., those associated with drug cartels) are much
more likely to have access to and to use cryptography.(13)

   In data communications, one of the first publicized
instances of law enforcement use of a Title III intercept
order to monitor a suspect's electronic mail occurred in
December 1995, when the customer of an on-line service
provider was the subject of surveillance during a criminal
investigation.(14) E-mail is used for communications; a
message is composed at one host, sent over a communications
link, and stored at another host. Two opportunities exist to
obtain the contents of an e-mail message -- the first while
the message is in transit over the communications link, and
the second while it is resident on the receiving host. From a
technical perspective, it is much easier to obtain the message
from the receiving host, and this is what happened in the
December 1995 instance. (Appendix D contains more detail on
how electronic communications are treated under Title III.)

   Federal law enforcement authorities believe that encryption
of communications (whether voice or data) will be a
significant problem in the future. FBI Director Louis Freeh
has argued that "unless the issue of encryption is resolved
soon, criminal conversations over the telephone and other
communications devices will become indecipherable by law
enforcement. This, as much as any issue, jeopardizes the
public safety and national security of this country. Drug
cartels, terrorists, and kidnappers will use telephones and
other communications media with impunity knowing that their
conversations are immune from our most valued investigative
technique."(15) In addition, the initial draft of the digital
telephony bill called for telephone service providers to
deliver the plaintext of any encrypted communications they
carried, a provision that was dropped in later drafts of the
bill.(16)

----------

   (12) In this regard, it is important to distinguish between
"voice scramblers" and encrypted voice communications. Voice
scramblers are a relatively old and widely available
technology for concealing the contents of a voice
communication; they transform the analog waveform of a voice
and have nothing to do with encryption per se. True encryption
is a transformation of digitally represented data. Voice
scramblers have been used by criminals for many years, whereas
devices for digital encryption remain rare.

   (13) For example, police raids in Colombia on offices of
the Cali cartel resulted in the seizure of advanced
communications devices, including radios that distort voices,
videophones to provide visual authentication of callers'
identities, and devices for scrambling computer modem
transmissions. The Colombian defense minister was quoted as
saying that the CIA had told him that the technological
sophistication of the Cali cartel was about equal to that of
the KGB at the time of the Soviet Union's collapse. See James
Brooke, "Crackdown Has Cali Drug Cartel on the Run," *New York
Times*, June 27, 1995, p. A-1.

   (14) See Gautam Naik, "U.S., Using E-Mail Tap, Charges
Three with Operating Cellular-Fraud Ring," *Wall Street
Journal*, January 2, 1996, p. B-16.

   (15) See the Prepared Statement of Louis J. Freeh,
Director, Federal Bureau of Investigation, for the Federal
Drug Law Enforcement Hearing before the House Judiciary
Committee, Subcommittee on Crime, U.S. House of
Representatives, March 30, 1995.

   (16) The final bill provides that "a telecommunications
carrier shall not be responsible for decrypting, or ensuring
the government's ability to decrypt, any communication
encrypted by a subscriber or customer, unless the encryption
was provided by the carrier and the carrier possesses the
information necessary to decrypt the communication."

____________________________________________________________


Encrypted Data Files

   Encryption by criminals of computer-based records that
relate to their criminal activity is likely to pose a
significant problem for law enforcement in the future. FBI
Director Freeh has noted publicly(17) two instances in which
encrypted files have already posed a problem for law
enforcement authorities: a terrorist case in the Philippines
involving a plan to blow up a U.S. airliner as well as a plan
to assassinate the Pope in late 1994,(18) and the "Innocent
Images" child pornography case of 1995 in which encrypted
images stood in the way of grand jury access procedures.(19)
Furthermore, Director Freeh told the committee that the use of
stored records in criminal prosecutions and investigations was
much more frequent than the use of wiretaps.

   The problem of encrypted data files is similar to the case
in which a criminal keeps books or records in a code or a
language that renders them unusable to anyone else -- in both
instances, the cooperation of the criminal (or someone else
with access to the key) is necessary to decipher the records.
The physical records as well as any recorded version of the
key, if such a record exists, are available through a number
of standard legal mechanisms, including physical search
warrants and subpoenas. On the other hand, while the nature of
the problem itself is the same in both instances, the ease and
convenience of electronic encryption, especially if performed
automatically, may increase the frequency with which
encryption is encountered and/or the difficulties faced by law
enforcement in cryptanalyzing the material in question without
the cooperation of the criminal.

   Finally, the problem of exceptional access to stored
encrypted information is more easily solved than the problem
of exceptional access to encrypted communications. The reason
is that for file decryption, the time constraints are
generally less stringent. A file may have existed for many
days or weeks or even years, and the time within which
decryption is necessary (e.g., to build a criminal case) is
measured on the time scale of investigatory activities; by
contrast, the relevant time scale in the case of decrypting
communications may be the time scale of operations, which
might be as short as minutes or hours.

----------

   (17) Speech of FBI Director Louis Freeh, before the
International Cryptography Institute, Washington, D.C.,
September 21, 1995.

   (18) A general discussion of this case is found in Phillip
Shenon, "World Trade Center Suspect Linked to Plan to Blow Up
2 Planes," *New York Times*, March 26, 1995, p. 37.

   (19) A general discussion of the Innocent Images case is
found in Kara Swisher, "On-Line Child Pornography Charged As
12 Are Arrested," *Washington Post*, September 14, 1995, p. 1.

____________________________________________________________


      3.3 NATIONAL SECURITY AND SIGNALS INTELLIGENCE(20)


   Cryptography is a two-edged sword for U.S. national
security interests. Cryptography is important in maintaining
the security of U.S. classified information (Appendix I), and
the U.S. government has developed its own cryptographic
systems to meet these needs. At the same time, the use of
cryptography by foreign adversaries also hinders U.S.
acquisition of communications intelligence. This section
discusses the latter. (Appendix F contains a short primer on
intelligence.)


         3.3.1 The Value of Signals Intelligence(21)

   Signals intelligence (SIGINT) is a critically important arm
of U.S. intelligence, along with imagery intelligence (IMINT)
and intelligence information collected directly by people,
i.e., human intelligence (HUMINT). SIGINT also provides timely
tip-off and guidance to IMINT and HUMINT collectors and is, in
turn, tipped off by them. As in the case of law enforcement,
the information contained in a communications channel treated
by an opponent as secure is likely to be free of intentional
deception.

   The committee has received both classified and unclassified
assessments of the current value of SIGINT and finds that the
level of reporting reflects a continuing capability to produce
both tactical and strategic information on a wide range of
topics of national intelligence interest. SIGINT production is
responding to the priorities established by Presidential
Decision Directive 35. As publicly described by President Bill
Clinton in remarks made to the staff of the CIA and
Intelligence Community, the priorities are as follows:

   +    "First, the intelligence need of our military during
an operation ...,

   +    Second, political, economic and military intelligence
about countries hostile to the United States. We must also
compile all-source information on major political and economic
powers with weapons of mass destruction who are potentially
hostile to us,

   +    Third, intelligence about specific trans-national
threats to our security, such as weapons proliferation,
terrorism, drug trafficking, organized crime, illicit trade
practices and environmental issues of great gravity."(22)

   SIGINT is one valuable component of the overall U.S.
intelligence capability. It makes important contributions to
ensure an informed, alert, and secure environment for U.S. war
fighters and policy makers.

----------

   (20) One note on terminology: In the signals intelligence
community, the term "access" is used to refer to obtaining the
desired signals, whether those signals are encrypted or not.
This use conflicts with the usage adopted in this report, in
which "access" generally means obtaining the information
contained in a signal (or message or file).

   (21) This report deals only with the communications
intelligence (COMINT) aspects of SIGINT; see Appendix F for a
discussion of electronic intelligence (ELINT) and its
relationship to COMINT.

   (22) Office of the Press Secretary, The White House,
"Remarks by the President to Staff of the CIA and Intelligence
Community," Central Intelligence Agency, McLean, Virginia,
July 14, 1995.

____________________________________________________________


SIGINT Support of Military Operations

   SIGINT is important to both tactical and strategic
intelligence. Tactical intelligence provides operational
support to forces in the field, whether these forces are
performing military missions or international law enforcement
missions (e.g., as in drug eradication raids in Latin America
conducted in cooperation with local authorities). The tactical
dimensions were most recently demonstrated in the Gulf War
through a skillfully orchestrated interaction of SIGINT,
IMINT, and HUMINT that showed the unequaled power of U.S.
intelligence. SIGINT produced timely command and control
intelligence and specific signal information to support
electronic warfare; IMINT, together with HUMINT, provided
precise locating information to permit precision bombing;
SIGINT and IMINT provided the field commands with an
unprecedented degree of battlefield awareness.

   History also demonstrates many instances in which SIGINT
has proven decisive in the conduct of tactical military
operations. These instances are more easily identified now
because the passage of time has made the information less
sensitive.

   +    The American naval victory at the Battle of Midway and
the destruction of Japanese merchant shipping resulted, in
part, from Admiral C.W. Nimitz's willingness to trust the
SIGINT information he received from his intelligence staff.
General George Marshall wrote that as the result of this
SIGINT information, "we were able to concentrate our limited
forces to meet [the Japanese] naval advance on Midway when
otherwise we almost certainly would have been some 3,000 miles
out of place."(23)

   +    The shoot-down in April 1943 of the commander-in-chief
of the Japanese Navy, Admiral Isoroku Yamamoto, was the direct
result of a signals intercept that provided his detailed
itinerary for a visit to the Japanese front lines.(24)

   +    The U.S. Navy was able to compromise the operational
code used by German U-boats in the Atlantic in 1944, with the
result that large numbers of such boats were sunk.(25)

   +    Allied intercepts of German army traffic were
instrumental in the defense of the Anzio perimeter in Italy in
February 1944, a defense that some analysts believe was a
turning point in the Italian campaign; these intercepts
provided advance knowledge of the German timing, direction,
and weight of assault, and enabled Allied generals to
concentrate their resources in the appropriate places.(26)

   While these examples are 50 years old, the nature of
warfare is not so different today as to invalidate the utility
of successful SIGINT. A primary difference between then and
now is that the speed of warfare has increased substantially,
placing a higher premium on real-time or near-real-time
intercepts. Since the end of World War II, SIGINT has provided
tactical support to every military operation involving U.S.
forces.

   Other types of tactical intelligence to which SIGINT can
contribute include indications and warning efforts (detecting
an adversary's preparations to undertake armed hostilities);
target identification, location, and prioritization (what
targets should be attacked, where they are, and how important
they are); damage assessment (how much damage an attacked
target sustained); and learning the enemy's rules of
engagement (under what circumstances an adversary is allowed
to engage friendly forces).

----------

   (23) A good discussion of these topics is given in Kahn,
*The Codebreakers*, 1967, pp. 561-573 (Midway) and pp. 593-594
(merchant shipping).

   (24) See Kahn, *The Codebreakers*, 1967, pp. 595-601.

   (25) Kahn, *The Codebreakers*, 1967, pp. 504-507.

   (26) See Ralph Bennett, *Ultra and Mediterranean Strategy*,
William Morrow and Company, New York, 1989, pp. 265-269.

   (27) See Kahn, *The Codebreakers*, 1967, pp. 358-359.

____________________________________________________________


SIGINT Support of Strategic Intelligence

   Strategic (or national) intelligence is intended to provide
analytical support to senior policy makers, rather than field
commanders. In this role, strategic or national intelligence
serves foreign policy, national security, and national
economic objectives. Strategic intelligence focuses on foreign
political and economic events and trends, as well as on
strategic military concerns such as plans, doctrine,
scientific and technical resources, weapon system
capabilities, and nuclear program development. History also
demonstrates the importance of SIGINT in a diplomatic,
counter-intelligence, and foreign policy context:

   +    In the negotiations following World War I over a
treaty to limit the tonnage of capital ships (the Washington
Conference on Naval Arms Limitations), the U.S. State
Department was able to read Japanese diplomatic traffic
instructing its diplomats. One particular decoded intercept
provided the bottom line in the Japanese position, information
that was useful in gaining Japanese concessions.(27)

   +    Recently, Director of Central Intelligence John Deutch
unveiled the so-called VENONA material, decrypted Soviet
intelligence service messages of the mid-1940s that revealed
Soviet espionage against the U.S. atomic program.(28)
Intelligence about the Cuban missile crisis has also been
released; although that episode is primarily a story about U-2
photography, the role of SIGINT is included as well.

   +    Decrypted intercepts of allied communications in the
final months of World War II played a major role in assisting
the United States to achieve its goals at the conference
called to decide on the United Nations charter. American
policy makers knew the negotiating positions of nearly all of
the participating nations and thus were able to control the
debate to a considerable degree.(29)

   +    During the Cold War, SIGINT provided information about
adversary military capabilities, weapons production, command
and control, force structure and operational planning, weapons
testing, and activities of missile forces and civil defense.

   In peacetime as in combat, each of the intelligence
disciplines can contribute critical information in support of
national policy. Former Director of Central Intelligence
Admiral Stansfield Turner has pointed out that "[e]lectronic
intercepts may be even more useful [than human agents] in
discerning intentions. For instance, if a foreign official
writes about plans in a message and the United States
intercepts it, or if he discusses it and we record it with a
listening device, those verbatim intercepts are likely to be
more reliable than second-hand reports from an agent."(30) He
also noted that "as we increase emphasis on securing economic
intelligence, we will have to spy on the more developed
countries -- our allies and friends with whom we compete
economically -- but to whom we turn first for political and
military assistance in a crisis. This means that rather than
instinctively reaching for human, on-site spying, the United
States will want to look to those impersonal technical
systems, primarily satellite photography and intercepts."(31)

   Today, the United States conducts the largest SIGINT
operation in the world, collecting information relevant to
conventional military threats; the proliferation of weapons of
mass destruction; terrorism; enforcement of international
sanctions; protection of U.S. economic and trade interests;
and political and economic developments abroad.

   +    U.S. intelligence has been used to uncover unfair
trade practices (as determined by U.S. law and custom) of
other nations whose industries compete with U.S. businesses,
and has helped the U.S. government to ensure the preservation
of a level economic playing field. According to the NSA, the
economic benefits of SIGINT contributions to U.S. industry
taken as a whole have totaled tens of billions of dollars over
the last several years.

   +    In sanctions-monitoring and enforcement, intelligence
intercepts of Serbian communications are reported to have been
the first indication for U.S. authorities that an F-16 pilot
enforcing a no-fly zone over Bosnia and shot down in June 1995
was in fact alive,(32) and an important element in his rescue.
If the pilot had indeed been captured, U.S. options in Bosnia
could have been greatly constrained.

   +    SIGINT that has been made public or that has been
tacitly acknowledged includes information about the shoot-down
of the Korean airliner KAL 007 on September 1, 1983, and the
bombing of La Belle Discotheque in West Berlin ordered by
Libya in April 1986.

   +    In foreign policy, accurate and timely intelligence
has been, and remains, vital to U.S. efforts to avert
conflicts between nations.

   +    In September 1988, President Ronald Reagan made the
decision to disclose NSA decrypts of Iraqi military
communications "to prove that, despite their denials, Iraqi
armed forces had used poison gas against the Kurds."(33)

   SIGINT has helped to produce information on weapons
proliferation, providing indications of
violations of treaties or embargo requirements. SIGINT has
collected information on international terrorism and foreign
drug trafficking, thereby assisting in the detection of drug
shipments intended for delivery to the United States.

   Similarly, such information will continue to be a source of
important economic intelligence.

   In conducting these intelligence-gathering operations, a
wide variety of sources may be targeted, including the
communications of governments, nongovernment institutions, and
individuals. For example, banking is an international
enterprise, and the U.S. government may need to know about
flows of money for purposes of counter-terrorism or sanctions
monitoring.

   Although the value of SIGINT to military operations and to
law enforcement is generally unquestioned, senior decision
makers have a wide range of opinions on the value of strategic
and/or political intelligence. Some decision makers are
voracious consumers of intelligence reports. They believe that
the reports they receive provide advance notice of another
party's plans and intentions, and that their own decisions are
better for having such information. These decision makers find
that almost no amount of information is too much, and any
given piece of information has the potential to be helpful.

   To illustrate the value of SIGINT to some senior policy
makers, it is helpful to recall President Clinton's remarks to
the intelligence community on July 14, 1995, at the CIA: he
said that "in recent months alone you warned us when Iraq
massed its troops against the Kuwaiti border. You provided
vital support to our peacekeeping and humanitarian missions in
Haiti and Rwanda. You helped to strike a blow at a Colombian
drug cartel. You uncovered bribes that would have cheated
American companies out of billions of dollars." On a previous
occasion, then-President George Bush gave his evaluation of
SIGINT when he said that "... over the years I've come to
appreciate more and more the full value of SIGINT. As
President and Commander-in-Chief, I can assure you, signals
intelligence is a prime factor in the decision making process
by which we chart the course of this nation's foreign
affairs."(34)

   Some policy makers, generally less senior than the
President, have stated that while intelligence reports are
occasionally helpful, they do not in general add much to their
decision-making ability because they contribute to information
overload, are not sufficiently timely (in the sense that the
information would become known shortly in any event), lack
necessary
context-setting information, or do not provide much
information beyond that available from open sources. Even
among the members of the committee who have served in senior
government positions, this range of opinion is
represented.(35)

   The perceived value of strategic SIGINT (as with many other
types of intelligence) depends largely on the judgment and
position of the particular individuals whom the intelligence
community is serving. These individuals change over time as
administrations come and go, but intelligence capabilities are
built up over a time scale longer than the election cycle. The
result is that the intelligence community gears itself to
serve those decision makers who will demand the most from it,
and is loath to surrender sources and/or capabilities that may
prove useful to decision makers.

   Since the benefits of strategic intelligence are so
subjective, formal cost-benefit analysis cannot be used to
justify a given level of support for intelligence. Rather,
intelligence tends to be supported on a "level-of-effort"
basis, that is, a political judgment about what is
"reasonable," given other defense and nondefense pressures on
the overall national budget.

----------

   (28) Center for Cryptologic History, National Security
Agency, *Introductory History of VENONA and Guide to the
Translations*, Fort George G. Meade, Maryland, undated. VENONA
material is also available from the Web site of the National
Security Agency at
http://www.nsa.gov:8080/docs/venona/venona.html.

   (29) Stephen Schlesinger, "Cryptanalysis for Peacetime:
Codebreaking and the Birth and Structure of the United
Nations," *Cryptologia*, Volume 19(3), July 1995, pp. 217-235.

   (30) Stansfield Turner, "Intelligence for a New World
Order," *Foreign Affairs*, Fall 1991, pp. 150-166.

   (31) Turner, "Intelligence for a New World Order," 1991,
pp. 150-166.

   (32) Daniel Williams, "'I'm Ready to Get the Hell Out of
Here,"' *Washington Post*, July 9, 1995, p. A-1.

   (33) Christopher Andrew, *For the President's Eyes Only*,
HarperCollins, New York, 1995.

   (34) *Public Papers of the Presidents*, U.S. Government
Printing Office, Washington, D.C., 1991, as quoted by Andrew
in *For the President's Eyes Only*, 1995, p. 526.

   (35) For an open-source report on the value of intelligence
as perceived by different policy makers, see David E. Sanger,
"Emerging Role for the C.I.A.: Economic Spy," *New York
Times*, October 15, 1995, p. 1; David E. Sanger, "When Spies
Look Out for the Almighty Buck," *New York Times*, October 22,
1995, p. 4.

____________________________________________________________


          3.3.2 The Impact of Cryptography on SIGINT

   Cryptography poses a threat to SIGINT for two separate but
related reasons:

   +    Strong cryptography can prevent any given message from
being read or understood. Strong cryptography used primarily
by foreign governments with the discipline to use those
products on a regular and consistent basis presents the United
States with a formidable challenge. Some encrypted traffic
regularly intercepted by the United States is simply
undecipherable by any known means.

   +    Even weak cryptography, if practiced on a widespread
basis by foreign governments or other entities, increases the
cost of exploitation dramatically.(36) When most messages that
are intercepted are unencrypted, the cost to determine whether
an individual message is interesting is quite low. However, if
most intercepted messages are encrypted, each one has to be
cryptanalyzed individually, because the interceptor does not
know if it is interesting or not.(37)

   According to administration officials who testified to the
committee, the acquisition and proper use of cryptography by
a foreign adversary could impair the national security
interests of the United States in a number of ways:

   +    Cryptography used by adversaries on a wide scale would
significantly increase the cost and difficulty of intelligence
gathering across the full range of U.S. national security
interests.

   +    Cryptography used by governments and foreign companies
can increase an adversary's capability to conceal the
development of missile delivery systems and weapons of mass
destruction.

   +    Cryptography can improve the ability of an adversary
to maintain the secrecy of its military operations to the
detriment of U.S. or allied military forces that might be
similarly engaged.

   The above comments suggest that widespread deployment of
strong cryptography will diminish the capabilities of those
responsible for SIGINT. Today, there is a noticeable trend
toward better and cheaper encryption that is
steadily closing the window of exploitation of unencrypted
communications. The growth of strong encryption will reduce
the availability of such intelligence. Using capabilities and
techniques developed during the Cold War, the SIGINT system
will continue its efforts to collect against countries and
other entities newly hostile to the United States. Many
governments and parties in those nations, however, will be
potential customers for advanced cryptography as it becomes
available on world markets. In the absence of improved
cryptanalytic methods, cooperative arrangements with foreign
governments, and new ways of approaching the information
collection problem, it is likely that losses in traditional
SIGINT capability would result in a diminished effectiveness
of the U.S. intelligence community.

----------

   (36) This point is echoed in Susan Landau et al., *Codes,
Keys, and Conflicts: Issues in U.S. Crypto Policy*, 1994, p.
25.

   (37) For example, assume that 1 out of every 1,000 messages
is interesting, and the cost of intercepting a message is X
and the cost of decrypting a message is Y. Thus, each
interesting message is acquired at a cost of 1,000X + Y (1,000
interceptions plus a single decryption). However, if every
message is encrypted, each of the 1,000 intercepted messages
must be cryptanalyzed to find the interesting one, so the cost
of each interesting message is 1,000(X + Y), which is
approximately 1,000Y larger. In other words, the cryptanalyst
must do roughly 1,000 times more decryption work for each
interesting message.
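
   As a purely illustrative sketch, the same comparison can be
expressed in a few lines of Python (the values assigned to X
and Y below are arbitrary assumptions; only their ratio
matters):

      # Cost model of this footnote, with placeholder values.
      X = 1.0      # assumed cost of intercepting one message
      Y = 50.0     # assumed cost of decrypting one message
      N = 1000     # one interesting message per N intercepted

      # Baseline: traffic is largely unencrypted, so the
      # interesting message is identified cheaply and only it
      # is decrypted.
      cost_unencrypted = N * X + Y                 # 1050.0

      # If every message is encrypted, all N intercepts must be
      # cryptanalyzed before the interesting one can be found.
      cost_encrypted = N * (X + Y)                 # 51000.0

      # The difference is (N - 1) * Y, approximately N * Y.
      print(cost_encrypted - cost_unencrypted)     # 49950.0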

____________________________________________________________


             3.4 SIMILARITIES IN AND DIFFERENCES
          BETWEEN FOREIGN POLICY/NATIONAL SECURITY
                  AND LAW ENFORCEMENT NEEDS
                FOR COMMUNICATIONS MONITORING


   It is instructive to consider the similarities in and
differences between national security and law enforcement
needs for communications monitoring.


                      3.4.1 Similarities

   +    *Secrecy*. Both foreign policy and law enforcement
authorities regard surreptitiously intercepted communications
as a more reliable source than information produced through
other means. Surveillance targets usually believe (however
falsely) that their communications are private; therefore,
eavesdropping must be surreptitious and the secrecy of
monitoring maintained. Thus, the identity and/or nature of
specific SIGINT sources are generally very sensitive pieces of
information, and are divulged only for good cause.

   +    *Timeliness*. For support of tactical operations,
near-real-time information may be needed (e.g., when a crime
or terrorist operation is imminent, when hostile forces are
about to be engaged).

   +    *Resources available to targets*. Many parties
targeted for electronic surveillance for foreign policy
reasons or by law enforcement authorities lack the resources
to develop their own security products, and are most likely to
use what they can purchase on the commercial market.

   +    *Allocation of resources for collection*. The size of
the budget allocated to law enforcement and to the U.S.
intelligence community is not unlimited. Available resources
constrain both the amount of surveillance law enforcement
officials can undertake and the ability of the U.S. SIGINT
system to respond to the full range of national intelligence
requirements levied upon it.

        --     Electronic surveillance, although in many cases
        critical, is only one of the tools available to U.S.
        law enforcement. Because it is manpower intensive, it
        is a tool used sparingly; thus, it represents a
        relatively small percentage of the total investment.
        The average cost of a wiretap order is $57,000 (see
        Appendix D) or approximately one-half of a
        full-time-equivalent agent-year.

        --     The U.S. SIGINT system is a major contributor
        to the overall U.S. intelligence collection capability
        and represents a correspondingly large percentage of
        the foreign intelligence budget. Although large, the
        U.S. system is by no means funded to "vacuum clean"
        the world's communications. It is sized to gather the
        most potentially lucrative foreign signals and
        targeted very selectively to collect and analyze only
        those communications most likely to yield information
        relating to highest priority intelligence needs.

   +    *Perceptions of the problem*. The volume of electronic
traffic and the use of encryption are both expected to grow,
but how the growth of one will compare to that of the other is
unclear at present. If the overall growth in the volume of
unencrypted electronic traffic lags the growth in the use of
cryptography, those conducting surveillance for law
enforcement or foreign policy reasons may perceive a loss in
access because the fraction of intercepts available to them
will decrease, even if the absolute amount of information
intercepted has increased as the result of larger volumes of
information. Of course, if the communicating parties take
special care to encrypt their sensitive communications, the
absolute amount of useful information intercepted may decrease
as well.


                      3.4.2 Differences

   +    *Protection of sources*. While the distinction is not
hard and fast, law enforcement authorities conducting an
electronic surveillance are generally seeking specific items
of evidence that relate to a criminal act and that can be
presented in open court, which implies that the source of such
information (i.e., the wiretap) will be revealed (and possibly
challenged for legal validity). By contrast, national security
authorities are usually seeking a body of intelligence
information over a longer period of time and are therefore far
more concerned with preserving the secrecy of sources and
methods.

   +    *Definition of interests*. There is a consensus,
expressed in law, about the specific types of domestic crimes
that may be investigated through the use of wiretapping. Even
internationally, there is some degree of consensus about what
activities are criminal; the existence of this consensus
enables a considerable amount of law enforcement cooperation
on a variety of matters. National security interests are
defined differently and are subject to refinement in a
changing world, and security interests often vary from nation
to nation. However, a community of interest among NATO allies
and between the United States and the major nations of the
free world makes possible fruitful intelligence relationships,
even though the United States may at times target a nation
that is both ally and competitor.

   +    *Volume of potentially relevant communications*. The
volume of communications of interest to law enforcement
authorities is small compared to the volume of interest to
national security authorities.

   +    *Legal framework*. Domestic law enforcement
authorities are bound by constitutional protections and
legislation that limit their ability to conduct electronic
surveillance. National security authorities operate under far
fewer legal constraints in monitoring the communications of
foreign parties located outside the United States.

   +    *Perceptions of vulnerability to surveillance*.
Parties targeted by national security authorities are far more
likely to take steps to protect their communications than are
most criminals.



            3.5 BUSINESS AND INDIVIDUAL NEEDS FOR
         EXCEPTIONAL ACCESS TO PROTECTED INFORMATION


   As noted above in Section 3.1, an employer may need access
to data that has been encrypted by an employee. Corporations
that use cryptography for confidentiality must always be
concerned with the risk that keys will be lost, corrupted,
required in some emergency situation, or otherwise be
unavailable, and they have a valid interest in protecting
themselves against these eventualities.(38)

   Cryptography can present problems for companies attempting
to satisfy their legitimate business interests in access to
stored and communicated information:

   +    *Stored data*. For entirely legitimate business
reasons, an employee might encrypt business records, but due
to circumstances such as vacation or sick leave, the employer
might need to read the contents of these records without the
employee's immediate assistance. Then again, an employee might
simply forget the relevant password to an encrypted file, or
an employee might maliciously refuse to provide the key (e.g.,
if he has a grudge against his employer), or might keep
records that are related to improper activities but encrypt
them to keep them private; a business undertaking an audit to
uncover or investigate these activities might well need to
read these records without the assistance of the employee. For
example, in a dispute over alleged wrongdoing of his
superiors, a Washington, D.C., financial analyst changed the
password on the city's computer and refused to share it.(39)
In another incident, the former chief financial officer of an
insurance company, Golden Eagle Group Ltd, installed a
password known only to himself and froze out operations. He
demanded a personal computer that he claimed was his, his
final paycheck, a letter of reference, and a $100 fee --
presumably for revealing the password.(40) While technical
fixes for these problems are relatively easy, they do
demonstrate the existence of motivation to undertake such
actions. Furthermore, allowing a single employee to control
critical data is poor management practice, but that issue is
beyond the scope of this study.

   +    *Communications*. A number of corporations provided
input to the committee indicating that for entirely legitimate
business reasons (e.g., for resolution of a dispute between
the corporation and a customer), an employer might need to
learn about the content of an employee's communications.
Alternatively, an employee might use company communications
facilities as a means for conducting improper activities
(e.g., leaking company-confidential information, stealing
corporate assets, engaging in kickback or fraud schemes,
inappropriately favoring one supplier over another). A
business undertaking an audit to uncover or investigate these
activities might well need to monitor these communications
without the consent of the employee (Box 3.1)(41) but would be
unable to do so if the communications were encrypted. In other
instances, a company might wish to assist law enforcement
officials in investigating information crimes against it(42)
but would not be able to do so if it could not obtain access
to unsanctioned employee-encrypted files or communications.
Many, though certainly not all, businesses require prospective
employees to agree as a condition of employment that their
communications are subject to employer monitoring under
various circumstances.(43)

   It is a generally held view among businesses that
provisions for corporate exceptional access to stored data are
more important than such provisions for communications.(44)
For individuals, the distinction is even sharper. Private
individuals as well as businesses have a need to retrieve
encrypted data that is stored and for which they may have lost
or forgotten the key. For example, a person may have lost the
key to an encrypted will or financial statement and wish to
retrieve the data. However, it is much more difficult to
imagine circumstances under which a person might have a
legitimate need for the real-time monitoring of
communications.

----------

   (38) While users may lose or corrupt keys used for user
authentication, the procedures needed in that event are
different from those needed if the keys in question are for
encryption. For
example, a lost authentication key creates a need to revoke
the key, so that another party that comes into possession of
the authentication key cannot impersonate the original owner.
By contrast, an encryption key that is lost creates a need to
recover the key.

   (39) Peter G. Neumann, *Computer-Related Risks*,
Addison-Wesley, New York, 1995, p. 154.

   (40) Neumann, *Computer-Related Risks*, 1995, p. 154.

   (41) For example, employees with Internet access may spend
so much time on nonwork-related Internet activities that their
productivity is impaired. Concerns about such problems have
led some companies to monitor the Internet activities of their
employees, and have spawned products that covertly monitor and
record Internet use. See Laurie Flynn, "Finding On-Line
Distractions, Employers Strive to Keep Workers in Line," *New
York Times*, November 6, 1995, p. D-5.

   (42) A number of examples of such cooperation can be found
in Peter Schweizer, *Friendly Spies*, The Atlantic Monthly
Press, New York, 1993.

   (43) The legal ramifications of employer access to
on-the-job communications of employees are interesting, though
outside the scope of this report. For example, a company
employee may communicate with another company employee using
cryptography that denies employer access to the content of
those communications; such use may be contrary to explicit
company policy. May an employee who has violated company
policy in this manner be discharged legally? In general,
employer access to on-the-job communications raises many issues
of ethics and privacy, even if such access is explicitly
permitted by contract or policy.

   (44) This distinction becomes somewhat fuzzy when
considering technologies such as e-mail that serve the purpose
of communications but that also involve data storage. Greater
clarity is possible if one distinguishes between the
electronic bits of a message in transit (e.g., on a wire) and
the same bits that are at rest (e.g., in a file). With e-mail,
the message is sent and then stored; thus, e-mail can be
regarded as a stored communication. These comments suggest
that a need for exceptional access to e-mail is much more
similar to that for storage than for communications, because
it is much more likely that a need will arise to read an
e-mail message after it has been stored than while it is in
transit. A likely scenario of exceptional access to e-mail is
that a user may receive e-mail encrypted with a public key for
which he no longer has the corresponding private key (that
would enable him to decrypt incoming messages). While this
user could in principle contact the senders and inform them of
a new public key, an alternative would be to develop a system
that would permit him to obtain exceptional access without
requiring such actions.

____________________________________________________________


            3.6 OTHER TYPES OF EXCEPTIONAL ACCESS
                  TO PROTECTED INFORMATION


   The discussion of exceptional access above involves only
the question of encryption for confidentiality. While it is
possible to imagine legitimate needs for exceptional access to
encrypted data (for purposes of ensuring secrecy), it is
nearly impossible to imagine a legitimate need for exceptional
access to cryptography used for the purposes of user
authentication, data integrity, or nonrepudiation. In a
business context, these cryptographic capabilities implement
or support long-standing legal precepts that are essential to
the conduct of commerce.

   +    Without unforgeable digital signatures, the concept of
a binding contract is seriously weakened.

   +    Without trusted digitally notarized documents,
questions of time precedence might not be legally resolvable.

   +    Without unforgeable integrity checks, the notion of a
certifiably accurate and authentic copy of digital documents
is empty.

   +    Without strong authentication and unquestionable
nonrepudiation, the analog of registered delivery in postal
systems is open to suspicion.(45)

   With exceptional access to the cryptography implementing
such features or to the private keys associated with them, the
legal protection that such features are intended to provide
might well be called into question. At a minimum, there would
likely be a questioning of the validity or integrity of the
protective safeguards, and there might be grounds for legal
challenge. A businessperson might have to demonstrate, for
example, that he has properly and adequately protected the
private keys used to digitally sign his contracts to the
satisfaction of a court or jury.
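
   To make the role of an unforgeable signature concrete, the
following fragment is a minimal, purely illustrative sketch of
signing and verifying a contract. It assumes the third-party
Python package "cryptography"; the algorithm (RSA with PSS
padding and SHA-256), the key size, and the names used in the
fragment are illustrative choices rather than recommendations
of this report.

      # Illustrative only: sign a contract with a private key and
      # verify it with the corresponding public key. Assumes the
      # third-party "cryptography" package is installed.
      from cryptography.exceptions import InvalidSignature
      from cryptography.hazmat.primitives import hashes
      from cryptography.hazmat.primitives.asymmetric import padding, rsa

      signer_key = rsa.generate_private_key(public_exponent=65537,
                                            key_size=2048)
      contract = b"Party A agrees to deliver 1,000 units to Party B."
      pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH)

      # Only the holder of the private key can produce this value.
      signature = signer_key.sign(contract, pss, hashes.SHA256())

      # Anyone holding the public key can confirm that the contract
      # is unaltered and was signed by the private-key holder; any
      # change to the text causes verification to fail.
      try:
          signer_key.public_key().verify(signature, contract, pss,
                                         hashes.SHA256())
          print("signature verifies")
      except InvalidSignature:
          print("signature does not verify")

   If a third party gained exceptional access to the private
key in such a scheme, a successful verification would no
longer establish who actually signed the contract, which is
precisely the concern raised above.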

   It is conceivable that the government, for national
security purposes, might seek exceptional access to such
capabilities for offensive information warfare (see Chapter
2); however, public policy should not promote these
capabilities, because such access could well undermine public
confidence in such cryptographic mechanisms.

---------

   (45) In fact, digital signatures and nonrepudiation provide
a stronger guarantee than does registered delivery; the former
can be used to assure the delivery of the contents of an
"envelope," whereas postal registered delivery can only be
used to assure the delivery of the envelope.

____________________________________________________________


                          3.7 RECAP


   In general, cryptography for confidentiality involves a
party undertaking an encryption (to protect information by
generating ciphertext from plaintext) and a party authorized
by the encryptor to decrypt the ciphertext and thus recover
the original plaintext. In the case of information that is
communicated, these parties are in general different
individuals. In the case of information that is stored, the
first party and the second party are in general the same
individual. However, circumstances can and do arise in which
third parties (i.e., decrypting parties that are not
originally authorized or intended by the encrypting party to
recover the information involved) may need access to such
information. These needs for exceptional access to encrypted
information may arise from businesses, individuals, law
enforcement, and national security, and these needs are
different depending on the parties in question. Encryption
that renders such information confidential threatens the
ability of these third parties to obtain the necessary access.

   How the needs for confidentiality and exceptional access
are reconciled in a policy context is the subject of Part II.

____________________________________________________________

           BOX 3.1 Examples of Business Needs for
            Exceptional Access to Communications

   +    A major Fortune 1000 corporation was the subject of
various articles in the relevant trade press. These articles
described conditions within the corporation (e.g., employee
morale), drawing on information supplied by employees of the
corporation acting in an unauthorized manner and contrary to
company policy; corporate management regarded the articles as
highly embarrassing to the company. The employees responsible
were identified
through a review of tapes of all their telephone conversations
in the period immediately preceding publication of the
damaging articles, and were summarily dismissed. As a
condition of employment, these employees had given their
employer permission to record their telephone calls.

   +    Executives at a major Fortune 1000 corporation had
made certain accommodations in settling the accounts of a
particular client that, while legal, materially distorted an
accounting audit of the books of that client. A review of the
telephone conversations in the relevant period indicated that
these executives had done so knowingly, and they were
dismissed. As a condition of employment, these executives had
given their employer permission to record their telephone
calls.

   +    Attempting to resolve a dispute about the specific
terms of a contract to sell oil at a particular price, a
multinational oil company needed to obtain all relevant
records. Given the fact that oil prices fluctuate
significantly on a minute-by-minute basis, most such trades
are conducted and agreed to by telephone. All such calls are
recorded, in accordance with contracts signed by traders as a
condition of employment. Review of these voice records
provided sufficient information to resolve the dispute.

   +    A multinational company was notified by a law
enforcement agency in Nation A regarding its suspicions that
an employee of the company was committing fraud against the
company. This employee was a national of Nation B. The company
began an investigation of this individual in cooperation with
law enforcement authorities in Nation B, and in due course,
legal authorization for a wiretap on this individual using
company facilities was obtained. The company cooperated with
these law enforcement authorities in the installation of the
wiretap.

----------

SOURCE: Anonymous testimony to the committee.

____________________________________________________________

        BOX 3.2 Examples of the Utility of Wiretapping

   +    The El Rukn Gang in Chicago, acting as a surrogate for
the Libyan government and in support of terrorism, planned to
shoot down a commercial airliner within the United States
using a stolen military weapon. This act of terrorism was
prevented through the use of telephone wiretaps.

   +    The 1988 "Ill Wind" public corruption and Defense
Department fraud investigation relied heavily on court-ordered
telephone wiretaps. To date, this investigation has resulted
in the conviction of 65 individuals and more than a quarter of
a billion dollars in fines, restitutions, and recoveries.

   +    Numerous drug trafficking and money laundering
investigations, such as the "Polar Cap" and "Pizza Connection"
cases, utilized extensive telephone wiretaps in the successful
prosecution of large-scale national and international drug
trafficking organizations. "Polar Cap" resulted in the arrest
of 33 subjects and the recovery of $50 million in assets
seized. Additionally, in a 1992 Miami raid, which directly
resulted from wiretaps, agents confiscated 15,000 pounds of
cocaine and arrested 22 subjects.

   +    The investigation of convicted spy Aldrich Ames relied
heavily on wiretaps ordered under FISA authority.

   +    In a 1990 "Sexual Exploitation of Children"
investigation, the FBI relied heavily on wiretaps to prevent
violent individuals from abducting, torturing, and murdering
a child in order to make a "Snuff Murder" film.

----------

SOURCE: Federal Bureau of Investigation.

____________________________________________________________

            BOX 3.3 Law Enforcement Requirements
      for the Surveillance of Electronic Communications

   +    Prompt and expeditious access both to the contents of
electronic communications and to the "setup" information
necessary to identify the calling and called parties.

   +    Real-time, full-time monitoring capability for
intercepts. Such capability is particularly important in an
operational context, in which conversations among either
criminal conspirators (e.g., regarding a decision to take some
terrorist action) or criminals and innocent third parties
(e.g., regarding a purchase order for explosives from a
legitimate dealer) may have immediate significance.

   +    Delivery of intercepted communications to specified
monitoring facilities.

   +    Transparent access to the communications, i.e., access
that is undetectable to all parties to the communications
(except to the monitoring parties) and implementation of
safeguards to restrict access to intercept information.

   +    Verification that the intercepted communications are
associated with the intercept subject.

   +    Capabilities for some number of simultaneous
intercepts to be determined through a cooperative industry/law
enforcement effort.

   +    Reliability of the services supporting the intercept
at the same (or a higher) level of reliability as the
communication services provided to the intercept subject.

   +    A quality of service for the intercept that complies
with the performance standards of the service providers.

----------

SOURCE: *Law Enforcement Requirements for the Surveillance of
Electronic Communications*, FBI in cooperation with the
National Technical Investigators Association, June 1994.

____________________________________________________________

          BOX 3.4 How Noncryptography Applications
   of Information Technology Could Benefit Law Enforcement

   As acknowledged elsewhere in the main text, encryption in
ubiquitous use would create certain difficulties for law
enforcement. Nevertheless, it is important to place into
context the overall impact on law enforcement of the digital
information technologies that enable encryption and other
capabilities that are not the primary subject of this report.
Chapter 2 suggested how encryption capabilities can be a
positive force for more effective law enforcement (e.g.,
secure police communications). But information technology is
increasingly ubiquitous and could appear in a variety of other
applications less obvious than encryption. For example:

   +    Video technology has become increasingly inexpensive.
Thus, it is easy to imagine police cruisers with video cameras
that are activated upon request when police are responding to
an emergency call. Monitoring those cameras at police
headquarters would provide a method for obtaining timely
information regarding the need of the responding officers for
backup. Equipping individual police officers with even smaller
video cameras attached to their uniforms and recording such
transmissions would provide objective evidence to corroborate
(or refute) an officer's description of what he saw at a crime
scene.

   +    The number of users of cellular telephones and
wide-area wireless communications services will grow rapidly.
As such technologies enable private citizens to act as
responsible eyes and ears that observe and report emergencies
in progress, law enforcement officials will be able to respond
more quickly. (See, for example, Chana Schoenberger, "The
Pocket-Size Protector; Feeling Safe, not Stylish, with
Cellular Phones," *Washington Post*, August 29, 1995, page
B-5.)

   +    Electronically mediated sting operations help to
preserve cover stories of law enforcement officials. For
example, the Cybersnare sting operation resulted in the arrest
of six individuals who allegedly stole cellular telephone
numbers en masse from major companies, resulting in millions
of dollars of industry losses. Cybersnare was based on an
underground bulletin board that appealed to cellular telephone
and credit card thieves. Messages were posted offering for
sale cellular telephone "cloning" equipment and stolen
cellular telephone numbers, and included contact telephone
numbers that were traced to the individuals in question. (See
Gautam Naik, "Secret Service Agents Arrest Six Hackers in
Cellular-Phone Sting in Cyberspace," *Wall Street Journal*,
September 12, 1995, page B6.)

   +    The locations of automobiles over a metropolitan area
could be tracked automatically, either passively or actively.
An active technique might rely on a coded beacon that would
localize the position of the automobile on which it was
mounted. A passive technique might rely on automatic scanning
for license plates that were mounted on the roofs of cars. As
an investigative technique, the ability to track the location
of a particular automobile over a period of time could be
particularly important.

   Even today, information technology enables law enforcement
officials to conduct instant background checks for handgun
purchases and arrest records when a person is stopped for a
traffic violation. Retail merchants guard against fraud by
using information technology to check driving records when
cars are rented and credit checks for big purchases. The
Department of the Treasury uses sophisticated information
technology to detect suspicious patterns that might indicate
large-scale money laundering by organized crime.

   All such possibilities involve important social as well as
technical issues. For example, the first two examples featured
above seem relatively benign, while the last two raise serious
entrapment and privacy issues. Even the "instant background
checks" of gun buyers have generated controversy. The mention
of these applications (potential and actual) is not meant as
endorsement, recommendation, or even suggestion; it is meant,
rather, to place into better context the potential of
information technology to improve the capabilities of law
enforcement, while illustrating that concerns about excessive
government power are not limited to the issue of cryptography.

____________________________________________________________

[End Chapter 3]






                           Part II

                     Policy Instruments


   To the best of the committee's knowledge, the goals of U.S.
cryptography policy have not been explicitly formalized and
articulated within the government. However, senior government
officials have indicated that U.S. cryptography policy seeks
to promote the following objectives:

   +    Deployment of encryption adequate to protect
electronic commerce that may be transacted on the future
information infrastructure;

   +    Development and adoption of global (rather than
national) standards and solutions;

   +    Widespread deployment into products of encryption
capabilities for confidentiality that enable legal access for
law enforcement and national security purposes; and

   +    Avoidance of the development of de facto cryptography
standards (either domestically or globally) that do not permit
access for law enforcement and national security purposes,
thus ensuring that the use of such products remains relatively
limited.

   Many analysts believe that these goals are irreconcilable.
To the extent that this is so, the U.S. government is thus
faced with a policy problem requiring a compromise among these
goals that is tolerable, though by assumption not ideal with
respect to any individual goal. Such has always been the case
with many issues that generate social controversy -- balancing
product safety against the undesirability of burdensome
regulation on product vendors, public health against the
rights of individuals to refuse medical treatment, and so on.

   As this report is being written, U.S. cryptography policy
is still evolving, and the particular laws, regulations, and
other levers that government uses to influence behavior and
policy are under review or being developed.

   Chapter 4 is devoted to the subject of export controls,
which dominate industry concerns about national cryptography
policy. Many senior executives in the information technology
industry perceive these controls as a major limitation on
their ability to export products with encryption capabilities.
Furthermore, because exports of products with encryption
capabilities are governed by the regime applied to
technologies associated with munitions, reflecting the
importance of cryptography to national security, they are
generally subject to more stringent controls than exports of
other computer-related technologies.

   Chapter 5 addresses the subject of escrowed encryption.
Escrowed encryption is a form of encryption intended to
provide strong protection for legitimate uses but also to
permit exceptional access by government officials, by
corporate employers, or by end users under specified
circumstances. Since 1993, the Clinton Administration has
aggressively promoted escrowed encryption as a basic pillar of
national cryptography policy. Public concerns about escrowed
encryption have focused on the possibilities for failure in
the mechanisms intended to prevent improper access to
encrypted information, leading to losses of confidentiality.

   Chapter 6 addresses a variety of other aspects of national
cryptography policy and the public concerns that these aspects
have raised.

____________________________________________________________


                              4

                       Export Controls


   Export controls on cryptography and related technical data
have been a pillar of national cryptography policy for many
years. Increasingly, they have generated controversy because
they pit the needs of national security to conduct signals
intelligence against the information security needs of
legitimate U.S. businesses and the markets of U.S.
manufacturers whose products might meet these needs. Chapter
4 describes the current state of export controls on
cryptography and issues that these controls raise, including
their effectiveness in achieving their stated objectives;
negative effects that the export control regime has on U.S.
businesses and U.S. vendors of information technology that
must be weighed against the positive effects of reducing the
use of cryptography abroad; the mismatch between vendor and
government perceptions of export controls; and various other
aspects of the export control process as it is experienced by
those subject to it.


      4.1 BRIEF DESCRIPTION OF CURRENT EXPORT CONTROLS

   Many advanced industrialized nations, including the United
States, maintain controls on exports of cryptography. The
discussion below focuses on U.S. export controls; Appendix G
addresses foreign export control regimes on cryptography.


           4.1.1 The Rationale for Export Controls

   On the basis of discussion with senior government officials
and its own deliberations, the committee believes that the
current U.S. export control regime on products with encryption
capabilities for confidentiality is intended to serve two
primary purposes:

   +    To delay the spread of strong cryptographic
capabilities and the use of those capabilities throughout the
world. Senior intelligence officials recognize that in the
long run, the ability of intelligence agencies to engage in
signals intelligence will inevitably diminish due to a variety
of technological trends, including the greater use of
cryptography.(1)

   +    To give the U.S. government a tool for monitoring and
influencing the commercial development of cryptography. Since
any U.S. vendor that wishes to export a product with
encryption capabilities for confidentiality must approach the
U.S. government for permission to do so, the export license
approval process is an opportunity for the U.S. government to
learn in detail about the capabilities of such products.
Moreover, the results of the license approval process have
influenced the cryptography that is available on the
international market.

----------

   (1)  Although the committee came to this conclusion on its
own, it is consistent with that of the Office of Technology
Assessment, *Information Security and Privacy in Network
Environments*, Washington, D.C., September 1994.

____________________________________________________________


                4.1.2 General Description(2)

   Authority to regulate imports and exports of products with
cryptographic capabilities to and from the United States
derives from two items of legislation: the Arms Export Control
Act (AECA) of 1949 (intended to regulate munitions) and the
Export Administration Act (EAA; intended to regulate so-called
dual-use products(3)). The AECA is the legislative basis for
the International Traffic in Arms Regulations (ITAR), in which
the U.S. Munitions List (USML) is defined and specified. Items
on the USML are regarded for purposes of import and export as
munitions, and the ITAR are administered by the Department of
State. The EAA is the legislative basis for the Export
Administration Regulations (EAR), which define dual-use items
on a list known as the Commerce Control List (CCL)(4); the EAR
are administered by the Department of Commerce. The EAA lapsed
in 1994 but has been continued under executive order since
that time. Both the AECA and the EAA specify sanctions that
can be applied in the event that recipients of goods exported
from the United States fail to comply with all relevant
requirements, such as agreements to refrain from reexport (Box
4.1).

   At present, products with encryption capabilities can be
imported into the United States without restriction, although
the President does have statutory authority to regulate such
imports if appropriate. Exports are a different matter. Any
export of an item covered by the USML requires a specific
affirmative decision by the State Department's Office of
Defense Trade Controls, a process that can be time-consuming
and cumbersome from the perspective of the vendor and
prospective foreign purchaser.

   The ITAR regulate and control exports of all "cryptographic
systems, equipment, assemblies, modules, integrated circuits,
components or software with the capability of maintaining
secrecy or confidentiality of information or information
systems", in addition, they regulate information about
cryptography but not implemented in a product in a category
known as "technical data."(5)

   Until 1983, USML controls were maintained on all
cryptography products. However, since that time, a number of
relaxations in these controls have been implemented (Box 4.2),
although many critics contend that such relaxation has lagged
significantly behind the evolving marketplace. Today, the ITAR
provide a number of categorical exemptions that allow
for products in those categories to be regulated as dual-use
items and controlled exclusively by the CCL. For products that
do not fall into these categories and for which there is some
question about whether it is the USML or the CCL that governs
their export, the ITAR also provide for a procedure known as
commodity jurisdiction,(6) under which potential exporters can
obtain judgments from the State Department about which list
governs a specific product. A product granted commodity
jurisdiction to the CCL falls under the control of the EAR and
the Department of Commerce. Note that commodity jurisdiction
to the CCL is generally granted for products with encryption
capabilities using 40-bit keys regardless of the algorithm
used, although these decisions are made on a product-by-
product basis. In addition, when a case-by-case export
licensing decision results in CCL jurisdiction for a software
product, it is usually only the object code, which cannot be
modified easily, that is transferred; the source code of the
product (embedding the identical functionality but more easily
modified) generally remains on the USML.

   As described in Box 4.3, key differences between the USML
and the CCL have the effect that items on the CCL enjoy more
liberal export consideration than items on the USML. (This
report uses the term "liberal export consideration" to mean
treatment under the CCL.) Most importantly, a product
controlled by the CCL is reviewed only once by the U.S.
government, thus drastically simplifying the marketing and
sale of the product overseas.

   The most important of these explicit categorical exemptions
to the USML for cryptography are described in Box 4.4. In
addition, the current export control regime provides for an
individual case-by-case review of USML licensing applications
for products that do not fall under the jurisdiction of the
CCL. Under current practice, USML licenses to acquire and
export for internal use products with encryption capabilities
stronger than that provided by 40-bit RC2/RC4 encryption
(hereafter in this chapter called "strong encryption"(7)) are
generally granted to U.S.-controlled firms (i.e., U.S. firms
operating abroad, U.S.-controlled foreign firms, or foreign
subsidiaries of a U.S. firm). In addition, banks and financial
institutions (including stock brokerages and insurance
companies), whether U.S.-controlled or -owned or foreign-owned,
are generally granted USML licenses for strong cryptography
for use in internal communications and communications with
other banks even if these communications are not limited
strictly to banking or money transactions.

   In September 1994, the Administration promulgated
regulations that provided for U.S. vendors to distribute
approved products with encryption capabilities for
confidentiality directly from the United States to foreign
customers without using a foreign distributor and without
prior State Department approval for each export.(8) It also
announced plans to finalize a "personal use exemption" to
allow license-free temporary exports of products with
encryption capabilities when intended for personal use; a
final rule on the personal use exemption was announced in
early 1996 and is discussed below in Section 4.3.2. Lastly, it
announced a number of actions intended to streamline the
export control process to provide more rapid turnaround for
certain "preapproved" products.

   In August 1995, the Administration announced a proposal to
liberalize export controls on software products with
encryption capabilities for confidentiality that use
algorithms with a key space of 64 or fewer bits, provided that
the key(s) required to decrypt messages and files are
"properly escrowed"; such products would be transferred to the
CCL. However, since an understanding of this proposal requires
some background in escrowed encryption, discussion of it is
deferred to Chapter 5.

----------

   (2)  Two references that provide detailed descriptions of
the U.S. export control regime for products with encryption
capability are a memorandum by Fred Greguras of the law firm
Fenwick & West (Palo Alto, Calif.), dated March 6, 1995, and
titled "Update on Current Status of U.S. Export Administration
Regulations on Software" (available on
http://www.batnet.com/oikoumene/SftwareEU.html), and a paper
by Ira Rubenstein, "Export Controls on Encryption Software,"
in *Coping with U.S. Export Controls 1994* (PLI Com. Law &
Practice Course Handbook Series No. A-733), October 18, 1995.
The Greguras memorandum focuses primarily on the
requirements of products controlled by the Commerce Control
List, while the Rubenstein paper focuses primarily on how to
move a product from the Munitions List to the Commerce Control
List.

   (3)  A dual-use item is one that has both military and
civilian applications.

   (4)  The CCL is also commonly known as the Commodity
Control List.

   (5)  In general, however, encryption products intended for
domestic Canadian use do not require export licenses.

   (6)  Commodity jurisdiction is also often known by its
acronym, CJ.

   (7)  How much stronger than 40-bit RC2/RC4 such encryption
may be is unspecified.
Products incorporating the 56-bit DES algorithm are often
approved for these informal exemptions, and at times even
products using larger key sizes have been approved. But the
key size is not unlimited, as may be the case under the
explicit categorical exemptions specified in the ITAR.

   (8)  Prior to this rule, almost every encryption export
required an individual license. Only those exports covered by
a distribution arrangement could be shipped without an
individual license. This distribution arrangement required a
U.S. vendor of products with cryptographic capabilities to
export to a foreign distributor that could then resell them to
multiple end users. The distribution arrangement had to be
approved by the State Department and included some specific
language. Under the new rule, a U.S. vendor without a foreign
distributor can essentially act as his own distributor, and
avoid having to obtain a separate license for each sale.
Exporters are required to submit a proposed arrangement
identifying, among other things, specific items to be shipped,
proposed end users and end use, and countries to which the
items are destined. Upon approval of the arrangement,
exporters are permitted to ship the specified products
directly to end users in the approved countries based on a
single license. See Bureau of Political-Military Affairs,
Department of State, "Amendment to the International Traffic
in Arms Regulations," *Federal Register*, September 2, 1994.

____________________________________________________________


       4.1.3 Discussion of Current Licensing Practices


The Categorical Exemptions

   The categorical exemptions described in Box 4.4 raise a
number of issues:

   +    In the case of the 40-bit limitation, the committee
was unable to find a specific analytical basis for this
figure. Most likely, it was the result of a set of compromises
that were politically driven by all of the parties
involved.(9) However, whatever the basis for this key size,
recent successful demonstrations of the ability to undertake
brute-force cryptanalysis on messages encrypted with a 40-bit
key (Box 4.5) have led to a widespread perception that such
key sizes are inadequate for meaningful information security.
(A rough arithmetic sketch of the 40-bit key space appears at
the end of this list.)

   +    In the case of products intended for use only in
banking or money transactions, the exemption results from the
recognition by national security authorities that the
integrity of the world's financial system is worth protecting
with high levels of cryptographic security. Given the primacy
of the U.S. banking community in international financial
markets, such a conclusion makes eminent sense. Furthermore,
at the time this exemption was promulgated, the financial
community was the primary customer for products with
encryption capabilities.

   This rationale for protecting banking and money
transactions naturally calls attention to the possibilities
inherent in a world of electronic commerce, in which routine
communications will be increasingly likely to include
information related to financial transactions. Banks (and
retail shops, manufacturers, suppliers, end customers, and so
on) will engage in such communications across national
borders. In a future world of electronic commerce, connections
among nonfinancial institutions may become as important as the
banking networks are today. At least one vendor has been
granted authority to use strong encryption in software
intended for export that would support international
electronic commerce (though under the terms of the license,
strong encryption applies only to a small portion of the
transaction message).(10)

   +    In the case of products useful only for user
authentication, access control, and data integrity, the
exemption resulted from a judgment that the benefits of more
easily available technology for these purposes outweigh
whatever costs there might be to such availability. Thus, in
principle, these nonconfidentiality products from U.S. vendors
should be available overseas without significant restriction.

   In practice, however, this is not entirely the case. Export
restrictions on confidentiality have some "spillover" effects
that reduce somewhat the availability of products that are
intended primarily for authentication.(11)

   Another spillover effect arises from a desire among vendors
and users to build and use products that integrate multiple
cryptographic capabilities (for confidentiality and for
authentication/integrity) with general-purpose functionality.
In many instances, it is possible for cryptography for
authentication/integrity and cryptography for confidentiality
to draw on the same algorithm. Export control regulations may
require that a vendor weaken or even eliminate the encryption
capabilities of a product that also provides
authentication/integrity capabilities, with all of the
consequent costs for users and vendors (as described in
Section 4.3).

   Such spillover effects suggest that government actions that
discourage capabilities for confidentiality may also have some
negative impact on the development and use of products with
authentication/integrity capabilities even if there is no
direct prohibition or restriction on export of products with
capabilities only for the latter.
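
   Referring back to the first item in the list above, a rough
arithmetic sketch indicates why a 40-bit key space is widely
regarded as inadequate. The trial rates below are illustrative
assumptions, not measurements of any particular system;
brute-force search simply tries candidate keys until the
correct one is found.

      # A 40-bit key space contains 2**40 (about 1.1 trillion) keys.
      keys = 2 ** 40

      # Assumed, purely illustrative trial rates (keys tested per second).
      for rate in (10_000_000, 1_000_000_000):
          worst_case_days = keys / rate / 86_400
          print(f"{rate:>13,} keys/s -> at most {worst_case_days:,.1f} days")

      # On average only half the key space must be searched, so the
      # expected time is half the worst case; dividing the search among
      # many machines reduces it proportionally.

   Even at modest assumed rates, a single 40-bit key falls
within minutes to days under these assumptions, which is
consistent with the successful demonstrations described in Box
4.5.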


Informal Noncodified Practices

   As described above, it is current practice to grant USML
licenses for exports of strong cryptography to firms in a
number of categories described in Box 4.4. However, the fact
that this practice is not explicitly codified contributes to
a sense of uncertainty among vendors and users about the
process and in practice leads to unnecessary delays in license
processing.

   In addition, there is uncertainty about whether or not a
given foreign company is "controlled" by a U.S. firm.
Specifically, vendors often do not know (and cannot find out
in advance) whether a proposed sale to a particular foreign
company falls under the protection of this unstated exemption.
As a practical matter, the U.S. government has a specific set of
guidelines that are used to make this determination.(12) But
these rules require considerable interpretation and thus do
not provide clear guidance for U.S. vendors.

   A third issue that arises with current practice is that the
lines between "foreign" and "U.S." companies are blurring in
an era of transnational corporations, ad hoc strategic
alliances, and close cooperation between suppliers and
customers of all types. For example, U.S. companies often team
with foreign companies in global or international ventures. It
would be desirable for U.S. products with encryption
capabilities to be used by both partners to conduct business
related to such alliances without requiring a specific export
licensing decision.(13)

   In some instances, USML licenses have granted U.S.
companies the authority to use strong encryption rather freely
(e.g., in the case of a U.S. company with worldwide
suppliers). But these licenses are still the result of a
lengthy case-by-case review whose outcome is uncertain.

   Finally, the State Department and NSA explicitly assert
control over products without any cryptographic capability at
all but developed with "sockets," or more formally,
cryptographic applications programming interfaces into which
a user can insert his own cryptography. Such products are
regarded as having an inherent cryptographic capability
(although such capability is latent rather than manifest), and
as such are controlled by the USML, even though the text of
the ITAR does not mention these items explicitly.(14) In
general, vendors and users understand this to be the practice
and do not challenge it, but they dislike the fact that it is
not explicit.
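
   A minimal sketch of what such a "socket" looks like in
practice (the names and interface below are hypothetical,
chosen only to illustrate the idea): the application is
written against a generic interface, and any module that
satisfies the interface, weak or strong, can be installed
without changing the application itself.

      from typing import Protocol

      class CipherModule(Protocol):
          """The 'socket': any module exposing these calls can be plugged in."""
          def encrypt(self, key: bytes, plaintext: bytes) -> bytes: ...
          def decrypt(self, key: bytes, ciphertext: bytes) -> bytes: ...

      class NullCipher:
          """Placeholder shipped with the product; provides no secrecy."""
          def encrypt(self, key: bytes, plaintext: bytes) -> bytes:
              return plaintext
          def decrypt(self, key: bytes, ciphertext: bytes) -> bytes:
              return ciphertext

      def send_message(module: CipherModule, key: bytes, text: bytes) -> bytes:
          # The application does not care which module is installed.
          return module.encrypt(key, text)

      # A user may later replace NullCipher with a module of any strength,
      # which is why such products are treated as having a latent
      # cryptographic capability.
      send_message(NullCipher(), b"", b"routine business traffic")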

----------

   (9)  It is worth noting a common argument among many
nongovernment observers that any level of encryption that
qualifies for export (e.g., that qualifies for control by the
CCL, or that is granted an export license under the USML) must
be easily defeatable by NSA, or else NSA would not allow it to
leave the country. The subtext of this argument is that such
a level of encryption is perforce inadequate. Of course,
taken to its logical conclusion, this argument renders
impossible any agreement between national security authorities
and vendors and users regarding acceptable levels of
encryption for export.

   (10) "Export Approved for Software to Aid Commerce on
Internet," *New York Times*, May 8, 1995, p. D-7. " For
example, the Kerberos operating system is designed to provide
strong cryptographic authentication of users (and hence strong
access control for system resources). Typically, Kerberos is
distributed in the United States in source code through the
Internet to increase its usability on a wide range of
platforms, to accommodate diverse user needs, and to increase
maintainability; source code distribution is a common practice
on the Internet. However, since Kerberos uses the DES
algorithm as the cryptographic engine to support its
authentication features, the source code for Kerberos is
controlled under the USML and is not available through the
Internet to foreign end users. It is thus fair to say that
Kerberos is less used by foreign users than it might be if
there were no export controls on products with encryption
capabilities, even though the primary purpose of Kerberos is
authentication. Note that Kerberos is also designed with
operating system calls that support confidentiality. These
calls are stripped out of the exportable version of Kerberos,
which is only available in object form in any event.

   A second example was provided in testimony to the committee
from a company that had eliminated all cryptographic
capabilities from a certain product because of its perceptions
of the export control hurdles to be overcome. The capabilities
eliminated included those for authentication. While it can be
argued that the company was simply ignorant of the exemptions
in the ITAR for products providing authentication
capabilities, the fact remains that much of the vendor
community is either not familiar with the exemptions or does
not believe that they represent true "fast-track" or
"automatic" exceptions.

   (12) Under Defense Department guidelines for determining
foreign ownership, control, or influence (FOCI), a U.S.
company is considered under FOCI "whenever a foreign interest
has the power, direct or indirect, whether or not exercised,
and whether or not exercisable through the ownership of the
U.S. company's securities, by contractual arrangements or
other means, to direct or decide matters affecting the
management or operations of that company in a manner which may
result in unauthorized access to classified information or may
affect adversely the performance of classified contracts." A
FOCI determination for a given company is made on the basis of
a number of factors, including whether a foreign person
occupies a controlling or dominant minority position and the
identification of immediate, intermediate, and ultimate parent
organizations. (See Department of Defense, *National
Industrial Security Program Operating Manual*, DOD-5220.22-M,
January 1995, pp. 2-3-1 to 2-3-2.) According to ITAR
Regulation 122.2, "ownership" means that more than 50 percent
of the outstanding voting securities of the firm are owned by
one or more foreign persons. "Control" means that one or more
foreign persons have the authority or ability to establish or
direct the general policies or day-to-day operations of the
firm. Control is presumed to exist where foreign persons own
25 percent or more of the outstanding voting securities if no
U.S. persons control an equal or larger percentage. The
standards for control specified in 22 CFR 60.2(c) also provide
guidance in determining whether control in fact exists.
Defense Department Form 4415, August 1990, requires answers to
11 questions in order for the Defense Department to make a
FOCI determination for any given company.

   (13) In one instance reported to the committee, a major
multinational company with customer support offices in China
experienced a break-in in which Chinese nationals apparently
copied paper documents and computer files. File encryption
would have mitigated the impact associated with this "bag
job." Then-current export restrictions hampered deployment of
encryption to this site because the site was owned by a
foreign (Chinese) company rather than a U.S.-controlled
company and therefore not easily covered under then-current
practice.

____________________________________________________________


    4.2 EFFECTIVENESS OF EXPORT CONTROLS ON CRYPTOGRAPHY


   One of the most contentious points in the debate over
export controls on cryptography concerns their effectiveness
in delaying the spread of strong cryptographic capabilities
and the use of those capabilities throughout the world.
Supporters of the current export control regime believe that
these controls have been effective, and they point to the fact
that encryption is not yet in widespread commercial use abroad
and that a significant fraction of the traffic intercepted
globally is unencrypted. Further, they argue that U.S.
products with encryption capabilities dominate the
international market to an extent that impeding the
distribution of U.S. products necessarily affects worldwide
usage. Critics of current policy assert that export controls
have not been effective in limiting the availability of
cryptography abroad. For example, based on its ongoing survey
of cryptography products worldwide (a study widely cited by
critics of current policy), Trusted Information Systems Inc.
has noted that:

   [w]e have now identified 1181 products worldwide [as of
   March 30, 1996], and we're continuing to learn about new
   products, both domestic and foreign, on a daily basis.
   We've also obtained numerous products from abroad and are
   examining these products to assess their functionality and
   security. The survey results show that cryptography is
   indeed widespread throughout the world. Export controls
   outside of the U.S. appear to be less restrictive. The
   quality of foreign products seems to be comparable to that
   of U.S. products.(15)

   Furthermore, critics of U.S. export controls argue that
sources other than U.S. commercial vendors (specifically
foreign vendors, the in-house expertise of foreign users,
Internet software downloads, and pirated U.S. software) are
capable of providing very good cryptography that is usable by
motivated foreign users.

   In assessing the arguments of both supporters and critics
of the current export control regime, it is important to keep
in mind that the ultimate goal of export controls on
cryptography is to keep strong cryptography out of the hands
of potential targets of signals intelligence. Set against this
goal, the committee believes that the arguments of both
supporters and critics have merit but require qualification.

   The supporters of the current export regime are right in
asserting that U.S. export controls have had a nontrivial
impact in retarding the use of cryptography worldwide. This
argument is based on three linked factors.

   +    U.S. export controls on cryptography have clearly
limited the sale of U.S. products with encryption capabilities
in foreign markets; indeed, it is this fact that drives the
primary objection of U.S. information technology vendors to
the current export control regime on cryptography.

   +    Very few foreign vendors offer integrated products
with encryption capabilities.(16) U.S. information technology
products enjoy a very high reputation for quality and
usability, and U.S. information technology vendors, especially
those in the mass-market software arena, have marketing and
distribution skills that are as yet unparalleled by their
foreign counterparts. As a result, foreign vendors have yet to
fill the void left by an absence of U.S. products.

   +    U.S. information technology products account for a
large fraction of global sales. For example, a recent U.S.
International Trade Commission staff report points out that
over half of all world sales in information technology come
from the United States.(17) Actions that impede the flow of U.S.
products to foreign consumers are bound to have significant
effects on the rate at which those products are purchased and
used.

   On the other hand, it is also true that some foreign
targets of interest to the U.S. government today use
encryption that is for all practical purposes unbreakable;
major powers tend to use "home-grown" cryptography that they
procure on the same basis that the United States procures
cryptography for its own use, and export controls on U.S.
products clearly cannot prevent these powers from using such
cryptography.

   Furthermore, the fact that cryptography is not being widely
used abroad does not necessarily imply that export controls
are effective--or will be in the near future--in restraining
the use of cryptography by those who desire the protection it
can provide. The fact is that cryptography is not used widely
either in the United States or abroad, and so it is unclear
whether it is the lack of information security consciousness
described in Chapter 2 or the U.S. export control regime for
cryptography that is responsible for such non-use; most
probably, it is some combination of these two factors.

   The critics of the current export regime are right in
asserting that foreign suppliers of cryptography are many and
varied, that software products with encryption capabilities
are quite available through the Internet (probably hundreds of
thousands of individuals have the technical skill needed to
download such products), and that cryptography does pose
special difficulties for national authorities wishing to
control such technology (Box 4.6). Yet, most products with
encryption capabilities available on the Internet are not
integrated products; using security-specific products is
generally less convenient than using integrated products (as
described in Chapter 2), and because such products are used
less often, their existence and availability pose less of a
threat to the collection of signals intelligence.

   Furthermore, Internet products are, as a general rule,
minimally supported and do not have the backing of reputable
and established vendors.(18) Users who download software from
the Internet may or may not know exactly what code the product
contains and may not have the capability to test it to ensure
that it functions as described.(19) Corporate customers, the
primary driver for large-scale deployment of products, are
unlikely to rely on products that are not sold and supported
by reputable vendors, and it is products with a large
installed base (i.e., those created by major software vendors)
that would be more likely to have the high-quality encryption
that poses a threat to signals intelligence. Box 4.7 describes
the primary differences between commercial products and
"freeware" available on the Internet.

   The committee's brief survey of product literature
describing foreign stand-alone security-specific products with
encryption capabilities (Box 4.8) also indicated many
implementations that were unsound from a security standpoint,
even taking for granted the mathematical strength of the
algorithms involved and the proper implementation of the
indicated algorithms.(20) The committee has no reason to
believe that the stand-alone security-specific products with
encryption capabilities made by U.S. vendors are on average
better at providing security,(21) although the large
established software vendors in the United States do have
reputations for providing relatively high quality in their
products for features unrelated to security.(22) Without an
acceptable product certification service, most users have no
reliable way of determining the quality of any given product
for themselves.

   As a general rule, a potential user of cryptography faces
the choice of buying commercially available products with
encryption capabilities on the open market (perhaps
custom-made, perhaps produced for a mass market) or developing
and deploying those products independently. The arguments
discussed above suggest that global dissemination of knowledge
about cryptography makes independent development an option,
but the problems of implementing knowledge as a usable and
secure product drive many potential users to seek products
available from reputable vendors. In general, the greater the
resources available to potential users and the larger the
stakes involved, the more likely they are to attempt to
develop their own cryptographic resources. Thus, large
corporations and First World governments are, in general, more
likely than small corporations and Third World governments to
develop their own cryptographic implementations.

   Finally, the text of the ITAR seems to allow a number of
entirely legal actions that could have results that the
current export control regime is intended to prevent (see Box
4.9). For example, RSA Data Security Inc. has announced a
partnership with the Chinese government to fund an effort by
Chinese government scientists to develop new encryption
software. This software may be able to provide a higher degree
of confidentiality than software that qualifies today for
liberal export consideration under the CCL.(23)

----------

   (14) Specifically, the ITAR place on the USML
"cryptographic devices, software, and components specifically
designed or modified therefor, including: cryptographic
(including key management) systems, equipment, assemblies,
modules, integrated circuits, components or software with the
capability of maintaining secrecy or confidentiality of
information or information systems." Note that these
categories do not explicitly mention systems without
cryptography but with the capability of accepting "plug-in"
cryptography.

   (15) Available on line from the TIS home page,
http://www.tis.com; at the time of its presentation to the
committee, TIS had identified 450 such products available from
foreign nations. Testimony on this topic was first presented
by Steven Walker, president of Trusted Information Systems, to
the House Committee on Foreign Affairs, Subcommittee on
Economic Policy, Trade, and Environment, on October 12, 1993.
TIS briefed the committee on December 15, 1994, and July 19,
1995. The survey mentioned in testimony to the committee
continues, and regularly updated figures can be found on the
TIS Web page (http://www.tis.com/crypto-survey).

   (16) The Department of Commerce and the National Security
Agency found no general-purpose software products with
encryption capability from non-U.S. manufacturers. See
Department of Commerce and National Security Agency, *A Study
of the International Market for Computer Software with
Encryption*, January 11, 1996, p. III-9.

   (17) Office of Industries, U.S. International Trade
Commission, *Global Competitiveness of the U.S. Computer
Software and Service Industries*. Staff Research Study #21,
Washington, D.C., June 1995, executive summary.

   (18) Whether major vendors will continue to avoid the
Internet as a distribution medium remains to be seen. Even
today, a number of important products, including Adobe's
Acrobat Reader, Microsoft's Word Viewer and Internet
Assistant, and the Netscape Navigator are distributed through
the Internet. Some vendors make products freely available in
limited functionality versions as an incentive for users to
obtain full-featured versions; others make software products
freely available to all takers in order to stimulate demand
for other products from that vendor for which customers pay.

   (19) Indeed, the lack of quality control for
Internet-available software provides an opportunity for those
objecting to the proliferation of good products with
encryption capability to flood the market with their own
products anonymously or pseudonymously; such products may
include features that grant clandestine access with little
effort.

   (20) The committee's analysis of foreign stand-alone
products for cryptography was based on material provided to
the committee by TIS, which TIS had collected through its
survey. This material was limited to product brochures and
manuals, which the committee believes puts the best possible
face on a product's quality. Thus, the committee's
identification of security defects in these products is
plausibly regarded as a minimum estimate of their
weaknesses--more extensive testing (e.g., involving
disassembly) would be likely to reveal additional weaknesses,
since implementation defects would not be written up in a
product brochure. Moreover, the availability of a product
brochure does not ensure the availability of the corresponding
product; TIS has brochures for all of the 800-plus products
identified in its survey, but due to limited resources, it has
been able to obtain physical versions (e.g., a disk, a circuit
board) of fewer than 10 percent of the products described in
those brochures.

   (21) An "amateur" review of encryption for confidentiality
built into several popular U.S. mass-market software programs
noted that the encryption facilities did not provide
particularly good protection. The person who reviewed these
programs was not skilled in cryptography but was competent in
his understanding of programming and how the Macintosh manages
files. By using a few commonly available programming tools (a
file compare program, a "debugger" that allows the user to
trace the flow of how a program executes, and a "disassembler"
that turns object code into source code that can be examined),
the reviewer was able to access in less than two hours the
"protected" files generated by four out of eight programs. See
Gene Steinberg, "False Security," *MACWORLD*, November 1995,
pp. 118-121.

   One well-publicized cryptographic security flaw found in
the Netscape Corporation's Navigator Web browser is discussed
in footnote 34 in Chapter 2. Because of a second flaw,
Netscape Navigator could also enable a sophisticated user to
damage information stored on the host computer to which
Navigator is connected. (See Jared Sandberg, "Netscape
Software for Cruising Internet Is Found to Have Another
Security Flaw," *Wall Street Journal*, September 25, 1995, p.
B-12.)

   (22) In addition, a product with a large installed base is
subject to a greater degree of critical examination than a
product with a small installed base, and hence flaws in the
former are more likely to be noticed and fixed. Large
installed bases are more characteristic for products produced
by established vendors than of freeware or shareware
producers.

   (23) See Don Clark, "China, U.S. Firm Challenge U.S. on
Encryption-Software Exports," *Wall Street Journal*, February
8, 1996, p. A-10.

____________________________________________________________


              4.3 THE IMPACT OF EXPORT CONTROLS
           ON U.S. INFORMATION TECHNOLOGY VENDORS


   U.S. export controls have a number of interrelated effects
on the economic health of U.S. vendors and on the level of
cryptographic protection available to U.S. firms operating
domestically. (The impact of foreign import controls on U.S.
vendors is discussed in Chapter 6 and Appendix G.)


                 4.3.1 De Facto Restrictions
         on the Domestic Availability of Cryptography

   Current law and policy place no formal restrictions
whatever on products with encryption capabilities that may be
sold or used in the United States. In principle, the domestic
market can already obtain any type of cryptography it wants.
For stand-alone security-specific products, this principle is
true in practice as well. But the largest markets are not for
stand-alone security-specific products, but rather for
integrated products with encryption capabilities.

   For integrated products with encryption capabilities,
export controls do have an effect on domestic availability.
For example,

   +    The Netscape Communications Corporation distributes a
version of Netscape Navigator over the Internet and sells a
version as shrink-wrapped software. Because the Internet
version can be downloaded from abroad, its encryption
capabilities are limited to those that will allow for liberal
export consideration; the shrink-wrapped version is under no
such limitation and in fact is capable of much higher levels
of encryption.(24) Because it is so much more convenient to
obtain, the Internet version of Netscape Navigator is much
more widely deployed in the United States than is the
shrink-wrapped version, with all of the consequences for
information security that its weaker encryption capability
implies.

   +    The Microsoft Corporation recently received permission
to ship Windows NT Version 4, a product that incorporates a
cryptographic applications programming interface approved by
the U.S. government for commodity jurisdiction to the CCL.
However, this product is being shipped worldwide with a
cryptographic module that provides encryption capabilities
using 40-bit RC4.(25) While domestic users may replace the
default module with one providing stronger encryption
capabilities, many will not, and the result is a weaker
encryption capability for those users.

   +    A major U.S. software vendor distributes its major
product in modular form in such a way that the end user can
assemble a system configuration in accordance with local
needs. However, since the full range of USML export controls
on encryption is applied to modular products into which
cryptographic modules may be inserted, this vendor has not
been able to find a sensible business approach to distributing
the product in such a way that it would qualify for liberal
export consideration. The result has been that the encryption
capabilities provided to domestic users of this product are
much less than they would otherwise be in the absence of
export controls.

   What factors underlie the choices made by vendors that
result in the outcomes described above? At one level, the
examples above are simply the result of market decisions and
preferences. At a sufficiently high level of domestic market
demand, U.S. vendors would find it profitable and appropriate
to develop products for the domestic market alone. Similarly,
given a sufficiently large business opportunity in a foreign
country (or countries) that called for a product significantly
different from that used by domestic users, vendors would be
willing to develop a customized version of a product that
would meet export control requirements. Furthermore, many
other manufacturers of exportable products must cope with a
myriad of different requirements for export to different
nations (e.g., differing national standards for power, safety,
and electromagnetic interference), as well as differing
languages in which to write error messages or user manuals.
From this perspective, export controls are simply one more
cost of doing business outside the United States.

   On the other hand, the fact that export controls impose an
additional cost of doing business outside the United States
places U.S. companies planning to export products at a
disadvantage. A vendor incurs less expense and effort for a
single version of a product produced for both domestic and
foreign markets than it does when multiple versions are
involved. While the actual cost of developing two different
versions of a product with different key lengths and different
algorithms is relatively small, a much larger part of the
expense associated with multiple versions relates to
marketing, manufacture, support, and maintenance of multiple
product versions after the initial sale has been made.(26)

   Since a vendor may be unable to export a given product with
encryption capabilities to foreign markets, domestic market
opportunities must be that much greater to warrant a
domestic-only version. (Given that about half of all sales of
U.S. information technology vendors are made to foreign
customers, the loss of foreign markets can be quite damaging
to a U.S. vendor.(27)) When they are not, vendors have every
incentive to develop products with encryption capabilities
that would easily qualify for liberal export consideration. As
a result, the domestic availability of products with strong
encryption capability is diminished.

   While a sufficiently high level of domestic market demand
would make it profitable for U.S. vendors to develop products
for the domestic market alone, the "sufficiently" qualifier is
a strong one indeed, and one infrequently met in practice,
given the realities of the market in which vendors must sell
and compete.

   Users are also affected by an export control regime that
forces foreign and domestic parties in communication with each
other to use encryption systems based on different algorithms
and/or key lengths. In particular, an adversary attempting to
steal information will seek out the weakest point. If that
weakest point is abroad because of the weak cryptography
allowed for liberal export, then that is where the attack will
be. In businesses with worldwide network connections, it is
critical that security measures be taken abroad, even if key
information repositories and centers of activity are located
in the continental United States. Put differently, the use of
weak cryptography abroad means that sensitive information
communicated by U.S. businesses to foreign parties faces a
greater risk of compromise abroad because stronger
cryptography integrated into U.S. information technology is
not easily available abroad.

   Finally, the export licensing process can have a
significant impact on how a product is developed. For example,
until recently, products developed to permit the user to
substitute easily his own cryptography module were subject to
the USML and the ITAR.(28) One vendor pointed out to the
committee that its systems were designed to be assembled "out
of the box" by end users in a modular fashion, depending on
their needs and computing environment. This vendor believed
that such systems would be unlikely to obtain liberal export
consideration, because of the likelihood that a foreign user
would be able to replace an "export-approved" cryptography
module with a cryptography module that would not pass export
review. Under these circumstances, the sensible thing from the
export control perspective would be to deny exportability for
the modularized product even if its capabilities did fall
within the "safe harbor" provisions for products with
encryption capabilities.

   The considerations above led the committee to conclude that
U.S. export controls have had a negative impact on the
cryptographic strength of many integrated products with
encryption capabilities available in the United States.(29)
Export controls tend to drive major vendors to a "least common
denominator" cryptographic solution that will pass export
review as well as sell in the United States. The committee
also believes that export controls have had some impact on the
availability of cryptographic authentication capabilities
around the world. Export controls distort the global market
for cryptography, and the product decisions of vendors that
might be made in one way in the absence of export controls
may well be made another way in their presence.

   Some of the reasons for this vendor choice are explored in
the next section.

----------

   (24) The shrink-wrapped version of Netscape Navigator sold
within the United States and Canada supports several different
levels of encryption, including 40-bit RC4, 128-bit RC4,
56-bit DES, and triple-DES. The default for a domestic client
communicating with a domestic server is 128-bit RC4. Source:
Jeff Weinstein, Netscape Communications Corporation, Mountain
View, California, personal communication.

   (25) See Jason Pontin, "Microsoft Encryption API to Debut
in NT Workstation Beta," *Infoworld*, January 29, 1996, p. 25.

   (26) Note that development and support concerns are even
more significant when a given product is intended for
cross-platform use (i.e., for use in different computing
environments such as Windows, Mac OS, Unix, and so on), as is
the case for many high-end software products (such as database
retrieval systems): when a product is intended for use on 50
different platforms, doubling the effort required for each
platform entails far more additional work for the vendor than
it would if the product were intended for use on only one
platform.

   (27) See footnote 17.

   (28) Note, however, that the use of object-oriented
software technology can in general facilitate the use of
applications programming interfaces that provide "hooks" to
modules of the user's choosing. A number of vendors have
developed or are developing general-purpose applications
programming interfaces that will allow the insertion of a
module to do almost anything. Since these programming
interfaces are not specialized for cryptography, but instead
enable many useful functions (e.g., file compression,
backups), it is very difficult to identify a basis on which
applications incorporating these interfaces should be denied
export licenses simply because they *could* be used to support
encryption.

   A further discussion of recent developments involving
cryptography modules and cryptographic applications
programming interfaces is contained in Chapter 7.

   (29) A similar conclusion was reached by the FBI, whose
testimony to the committee noted that "the use of export
controls may well have slowed the speed, proliferation, and
volume of encryption products sold in the U.S." Written
Statement of "FBI Input to the NRC's National Cryptographic
Study Committee," received December 1, 1995.

____________________________________________________________


                4.3.2 Regulatory Uncertainty
                 Related to Export Controls

   A critical factor that differentiates the costs of
complying with export controls from other costs of doing
business abroad is the unpredictability of the export control
licensing process. (Other dimensions of uncertainty for
vendors not related to export controls are discussed in
Chapter 6.) A company must face the possibility that despite
its best efforts, a USML export license or a transfer of
commodity jurisdiction to the CCL will not be granted for a
product.
Uncertainties about the decisions that will emerge from the
export control regime force vendors into very conservative
planning scenarios. In estimating benefits and costs,
corporate planners must take into account the additional costs
that could be incurred in developing two largely independent
versions of the same product or limit the size of the
potential market to U.S. purchasers. When such planning
requirements are imposed, the number of product offerings
possible is necessarily reduced.

   USML licensing is particularly unpredictable, because the
reasons that a license is denied in any given instance are not
necessarily made available to the applicant; in some cases,
the rationale for specific licensing decisions is based on
considerations that are highly classified and by law cannot be
made available to an uncleared applicant. Since such
rationales cannot be discussed openly, an atmosphere of
considerable uncertainty pervades the development process for
vendors seeking to develop products for overseas markets.
Furthermore, there is no independent adjudicating forum to
which a negative licensing decision can be appealed.

   Since USML licensing is undertaken on a case-by-case basis,
it requires the exercise of judgment on the part of the
regulatory authorities. A judgment-based approach has the
disadvantage that it requires a considerable degree of trust
between the regulated and the regulator.(30) To the extent
that an individual regulated party believes that the regulator
is acting in the best interests of the entire regulated
community, it is natural that it would be more willing to
accept the legitimacy of the process that led to a given
result. However, in instances in which those that are
regulated do not trust the regulator, judgments of the
regulator are much more likely to be seen as arbitrary and
capricious.(31)

   This situation currently characterizes the relationship
between cryptography vendors/users and national security
authorities responsible for implementing the U.S. export
control regime for cryptography. In input received by the
committee, virtually all industry representatives, from large
to small companies, testified about the unpredictability of
the process. From the vendor point of view, the resulting
uncertainty inhibits product development and allows negative
decisions on export to be rendered by unknown forces and/or
government agencies with neither explanation nor a reasonable
possibility of appeal.

   The need to stay far away from the vague boundaries of what
might or might not be acceptable is clearly an inhibitor of
technological progress and development. Vendor concerns are
exacerbated in those instances in which export control
authorities are unwilling to provide a specific reason for the
denial of an export license or any assurance that a similarly
but not identically configured product with encryption
capabilities would pass export review. Even worse from the
vendor perspective, except within a very limited and narrow
range of cryptographic functionality, product parameters are
not the only determinant of whether a licensing decision will
be favorable.

   The uncertainty described above is not limited to new and
inexperienced vendors encountering the U.S. export control
regime for the first time; large and sophisticated
institutions with international connections have also
encountered difficulties with the current export control
regime. For example, a representative from a major U.S. bank
with many international branches reported that export controls
affect internally developed bank software with encryption
capabilities; a U.S. citizen who works on bank software with
encryption capabilities in England may "taint" that software
so that it falls under U.S. export control guidelines. Thus,
despite the fact that the current export control regime treats
banks and other financial institutions relatively liberally,
major banks have still struggled under the limitations of the
export control regime.

   The situation is worse for smaller companies. While large
companies have experience and legal staffs that help them to
cope with the export control regime, small companies do not.
New work on information technology often begins in garage-shop
operations, and the export control regime can be particularly
daunting to a firm with neither the legal expertise nor the
contacts to facilitate compliance of a product with all of the
appropriate regulations. These companies in particular are the
ones most likely to decide in the end to avoid entirely the
inclusion of cryptographic features for fear of running afoul
of the export control rules.

   The following three examples illustrate how the
unpredictability of the export control licensing process has
affected U.S. vendors and their products.


Modularity

   As noted above, cryptographic applications programming
interfaces that are directly and easily accessible to the user
are in general subject to USML licensing. However, even
"closed" interfaces that are not easily accessible to the user
are sometimes perceived to pose a risk for the vendor. One
major product vendor reported to the committee that it was
reluctant to use modular development for fear that even an
internal module interface could keep a product from passing
export control review. Any software product that uses modular
techniques to separate the basic product functionality from
the cryptography has a well-defined interface between the two.
Even when the software product is converted to object code,
that interface is still present (though it is hidden from the
casual user). However, the interface cannot in general be
hidden from a person with strong technical skills, and such a
person would be able to find it and tamper with it in such a
way that a different cryptography module could be used.(32) A
number of similar considerations apply for hardware products,
in which the cryptographic capabilities might be provided by
a "plug-in" chip.

   The alternative to the use of modular techniques in the
development of integrated products would complicate the
"swap-in/swap-out" of cryptographic capabilities: lines of
code (if software) and wires (if hardware) that implemented
cryptographic capabilities would be highly interwoven with
lines of code and wires that implemented the primary
capabilities of the product. However, this approach would be
tantamount to developing two largely distinct products, with
little overlap in the work required to produce them.

   The NSA has spoken publicly about its willingness to
discuss with vendors, from the early stages of product design,
the features and capabilities of proposed products with
encryption capabilities for confidentiality, so that the
export license approval process can be facilitated, and about
its willingness to abide by nondisclosure agreements to
reassure vendors that their intellectual property rights will
be protected.(33)
Nonetheless, the receipt of an export control license useful
for business purposes is not guaranteed by such cooperation.
For example, while decisions about commodity jurisdiction
often provide CCL jurisdiction for object code and USML
jurisdiction for source code (and thus need not inhibit
modular product development if the product is to be
distributed in object form only), the fact remains that such
decisions are part of a case-by-case review whose outcome is
uncertain. Different vendors are willing to tolerate different
levels of risk in this regard, depending on the magnitude of
the investments involved.

   As a general rule, NSA does not appear willing to make
agreements in advance that will assure licenses for a product
that has not yet been instantiated or produced. Such a
position is not unreasonable given NSA's stance toward
products with encryption capabilities in general, and the fact
that the true capabilities of a product may depend strongly on
how it is actually implemented in hardware or software. Thus,
vendors have no indemnification against the risk that a
product might not be approved.(34)


The Definition of Export

   There is uncertainty about what specific act constitutes
the "export" of software products with encryption
capabilities. It is reasonably clear that the act of mailing
to a foreign country a disk with a product with encryption
capabilities on it constitutes an export of that product. But
if that product is uploaded to an Internet site located in the
United States and is later downloaded by a user located in
another country, is the act of export the upload or the
download? What precautions must be taken by the uploader to
remain on the legal side of the ITAR?

   The committee has been unable to find any formal document
that indicates answers to these questions. However, a March
1994 letter from the State Department Office of Defense Trade
Controls appears to indicate that a party could permit the
posting of cryptographic software on an Internet host located
in the United States if "(a) the host system is configured so
that only people originating from nodes in the United States
and Canada can access the cryptographic software, or (b) if
the software is placed in a file or directory whose name
changes every few minutes, and the name of the file or
directory is displayed in a publicly known and readable file
containing an explicit notice that the software is for U.S.
and Canadian use only."(35) Of course, such a letter does not
provide formal guidance to parties other than the intended
addressee (indeed, under the ITAR, advisory opinions provided
to a specific party with a given set of circumstances are not
binding on the State Department even with respect to that
party), and so the issue remains murky.
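
   By way of illustration only, arrangement (b) described in
the letter could be approximated by a small script that
periodically moves the software archive to a freshly named
directory and records the current name in a publicly readable
notice file. The sketch below is in Python, with invented file
names; it is offered to clarify the mechanism being described,
not as a statement of what would satisfy the ITAR.

    import os, secrets, shutil, time

    ARCHIVE = "crypto-software.tar"
    NOTICE = "WHERE-TO-FIND-IT.txt"

    def rotate_once(current_dir: str) -> str:
        # Move the archive into a new, randomly named directory and
        # update the notice file announcing its current location.
        new_dir = "export-restricted-" + secrets.token_hex(4)
        os.mkdir(new_dir)
        shutil.move(os.path.join(current_dir, ARCHIVE),
                    os.path.join(new_dir, ARCHIVE))
        os.rmdir(current_dir)
        with open(NOTICE, "w") as f:
            f.write("This software is for U.S. and Canadian use "
                    "only.\nCurrent directory: " + new_dir + "\n")
        return new_dir

    # d = "export-restricted-initial"
    # while True:
    #     d = rotate_once(d)
    #     time.sleep(300)   # rotate every few minutes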


The Speed of the Licensing Process

   Uncertainty is also generated by a lengthy licensing
process without time lines that allow vendors to make
realistic schedules. Box 4.10 describes some of the problems
reported to the committee. To summarize, the perceptions of
many vendors about the excessive length of time it takes to
obtain a license reflect the time required for discussions
with NSA about a product before an application is formally
submitted; indeed, the prospect of facing the export control
process deters some vendors from creating certain products at
all. By contrast, NSA starts the clock only when it receives
a formal application, and in fact the usual time between
receipt of a formal application and rendering of a decision is
relatively short (a few weeks). The reason that such a fast
turnaround is possible is that by the time the application is
received, enough is known about the product involved that
processing is routine because there is no need for negotiation
about how the product must be changed for a license to be
approved.

   In response to some of these concerns, the U.S. government
has undertaken a number of reforms of the export control
regime (described in Section 4.1) to reduce the hassle and red
tape involved in obtaining export licenses.(36) These reforms
are important. Nevertheless, the pace at which new information
technology products develop and the increasing complexity of
those products will complicate product review efforts in the
future. Given relatively fixed staffing, these factors will
tend to increase the length of time needed to conduct product
reviews at a time when vendors are feeling pressures to
develop and market products more rapidly.

   One particular reform effort that deserves discussion is
the "personal use" exemption. For many years, Americans
traveling abroad were required under the ITAR to obtain
"temporary export licenses" for products with encryption
capabilities carried overseas for their personal use.(37) The
complexity of the procedure for obtaining such a license was
a considerable burden for U.S. businesspeople traveling
abroad, and these individuals were subject to significant
criminal penalties for an act that was widely recognized to be
harmless and well within the intent of the export control
regime.

   In February 1994, the Administration committed itself to
promulgating regulations to support a personal-use exemption
from the licensing requirement. Two years later, on February
16, 1996, the *Federal Register* contained a notice from the
Department of State, Bureau of Political Military Affairs,
announcing a final rule amending the International Traffic in
Arms Regulations (ITAR) to allow U.S. persons to temporarily
export cryptographic products for personal use without the
need for an export license.(38)

   Some critics of government policy have objected to the
particular formulation of the record-keeping requirement. All
parties involved--including senior Administration
officials--have agreed that 2 years was far too long a period
for promulgation of so simple a rule.

----------

   (30) In contrast to a judgment-based approach, a
clarity-based approach would start from the premise that
regulations and laws should be as clear as possible, so that
a party that may be affected knows with a high degree of
certainty what is and is not permitted or proscribed. The
downside of a clarity-based approach is that affected parties
tend to go "right up to the line" of what is prohibited and
may seek ways to "design around" any stated limitations.
Furthermore, a clarity-based approach would require the
specification, in advance, of all acts that are prohibited,
even when it may not be possible to define in advance all acts
that would be undesirable.

   (31) For example, critics of the uncertainty engendered by
the export regime point out that uncertainty is helpful to
policy makers who wish to retain flexibility to modify policy
without the work or publicity required for a formal regulatory
change.

   (32) Of course, such considerations obviously apply to
software products with cryptographic capabilities that are
designed to be shipped in source code; not only can the
cryptographic module be easily identified and replaced, but it
can also be pulled out and adapted to other purposes. This
point was also raised in footnote 11 of this chapter.

   (33) For example, NSA representatives made comments to this
effect at the RSA Data Security Conference in San Francisco,
California, in January 1995.

   (34) Although other industries also have to deal with the
uncertainties of regulatory approval regarding products and
services, the export control process is particularly opaque,
because clear decisions and rationales for those decisions are
often not forthcoming (and indeed are often classified and/or
unrelated to the product per se).

   (35) Letter from Clyde Bryant, Office of Defense Trade
Controls, U.S. Department of State, Washington, D.C., to
Daniel Appelman, Heller, Ehrman, White & McAuliffe, dated
March 11, 1994.

   (36) For example, according to NSA, the detailing of an NSA
representative to work with the State Department Office of
Defense Trade Controls has resulted in a considerable
reduction in the time needed to process a license.

   (37) For a description of how this process worked in
practice, see Matt Blaze, *My Life As an International Arms
Courier*, e-mail message circulated by Matt Blaze
(mab@research.att.com) on January 6, 1995. A news article
based on Blaze's story is contained in Peter H. Lewis,
"Between a Hacker and a Hard Place: DataSecurity Export Law
Puts Businesses in a Bind," *New York Times*, April 10, 1995,
p. D-1.

   (38) According to the regulation, the product must not be
intended for copying, demonstration, marketing, sale,
re-export, or transfer of ownership or control. It must remain
in the possession of the exporting person, which includes
being locked in a hotel room or safe. While in transit, it
must be with the person's accompanying baggage. Exports to
certain countries are prohibited--currently Cuba, Iran, Iraq,
Libya, North Korea, Sudan, and Syria. The exporter must
maintain records of each temporary export for 5 years. See
*Federal Register*, Volume 61(33), Friday, February 16, 1996,
Public Notice 2294, pp. 6111-6113.

____________________________________________________________


            4.3.3 The Size of the Affected Market
                      for Cryptography

   Since export controls on products with encryption
capabilities constrain certain aspects of sales abroad,
considerable public attention has focused on the size of the
market that may have been affected by export controls. Vendors
in particular raise the issue of market share with
considerable force:

   +    "The only effect of the export controls is to cause
economic harm to US software companies that are losing market
share in the global cryptography market to companies from the
many countries that do not have export controls."(39)

   +    "[The government's current policy on encryption] is
anti-competitive. The government's encryption export policy
jeopardizes the future of the software industry, one of the
fastest growing and most successful industries."(40)

   The size of the market for products with encryption
capabilities cuts across many dimensions of cryptography
policy, but since it is raised most often in the context of
the export control debate, it is addressed in this section.

   Plausible arguments can be made that the market ranges from
no more than the value of the security-specific products sold
annually (i.e., several hundred million dollars per year--a
low-end estimate)(41) to the total value of all hardware and
software products that might include encryption capabilities
(many tens of billions of dollars--a high-end estimate).(42)
The committee was unable to determine the size of the
information technology market directly affected by export
controls on encryption any more precisely than to within a
factor of 100 or more, a range of uncertainty that renders any
estimate of the market quite difficult to use as the basis for
a public policy decision.

   Nevertheless, although it is not large enough to be
decisive in the policy debate, the floor of such estimates--a
few hundred million dollars per year--is not a trivial sum.
Furthermore, all trends point to growth in this number, growth
that may well be very large and nonlinear in the near future.
To the extent that both of these observations are valid, it is
only a matter of a relatively short time before even the floor
of any estimate will be quite significant in economic terms.

   The next three subsections describe some of the factors
that confound the narrowing of the large range of uncertainty
in any estimate of the size of the market affected by export
controls.


Defining a "Lost Sale"

   A number of vendors have pointed to specific instances of
lost sales as a measure of the harm done to the vendors as the
result of export controls on cryptography.(43) National
security officials believe that these figures are considerably
overstated. Administration officials and congressional staff
have expressed considerable frustration in pinning down a
reliable estimate of lost sales. It is important to begin with
the understanding that the concept of a "lost sale" is
intrinsically soft. Trying to define the term "lost sales"
raises a number of questions:

   +     What events count as a sale lost because of export
restrictions? Several possibilities illustrate the
complications:

        -- A U.S. vendor is invited along with foreign vendors
        to bid on a foreign project that involves
        cryptography, but declines because the bid
        requirements are explicit and the U.S. vendor knows
        that the necessary export licenses will not be
        forthcoming on a time scale compatible with the
        project.

        -- A U.S. vendor is invited along with foreign vendors
        to bid on a foreign project that involves
        cryptography. In order to expedite export licensing,
        the U.S. vendor offers a bid that involves 40-bit
        encryption (thus ignoring the bid requirements), and
        the bid is rejected.

        -- A U.S. vendor is invited along with foreign vendors
        to bid on a foreign project that involves
        cryptography. A foreign vendor emerges as the winner.
        The sale is certainly a lost sale, but since customers
        often make decisions with a number of reasons in mind
        and may not inform losing vendors of their reasons, it
        is difficult to determine the relationship of export
        controls to the lost sale.

        -- No U.S. vendor is invited to bid on a foreign
        project that involves cryptography. In such an
        instance, the potential foreign customer may have
        avoided U.S. vendors, recognizing that the
        cryptography would subject the sale to U.S. export
        control scrutiny, possibly compromising sensitive
        information or delaying contract negotiations
        inordinately. On the other hand, the potential
        customer may have avoided U.S. vendors for other
        reasons, e.g., because the price of the U.S. product
        was too high.

   +     What part of a product's value is represented by the
cryptographic functionality that limits a product's sales when
export controls apply? As noted in Chapter 2, stand-alone
products with encryption capabilities are qualitatively
different from general-purpose products integrated with
encryption capabilities. A security-specific stand-alone
product provides no other functionality, and so the value of
the cryptography is the entire cost of the product. But such
sales account for a very small fraction of information
technology sales. Most sales of information technology
products with encryption capabilities are integrated products.
Many word processing and spreadsheet programs may have
encryption capabilities, but users do not purchase such
programs for those capabilities -- they purchase them to
enhance their ability to work with text and numbers.
Integrated products intended for use in networked environments
(e.g., "groupware") may well have encryption capability, but
such products are purchased primarily to serve collaboration
needs rather than encryption functions. In these instances, it
is the cost of the entire integrated product (which may not be
exportable if encryption is a necessary but secondary feature)
that counts as the value lost.

   +     How does a vendor discover a "lost sale"? In some
cases, a specific rejection counts as evidence. But in general
there is no systematic way to collect reliable data on the
number or value of lost sales.

   +     An often-unnoticed dimension of "lost sales" does not
involve product sales at all, but rather services whose
delivery may depend on cryptographic protection. For example,
a number of U.S. on-line service providers (e.g., America
Online, Compuserve, Prodigy) are intending to offer or expand
access abroad;(44) the same is true for U.S. providers of
telecommunications services.(45) To the extent that
maintaining the security of foreign interactions with these
service providers depends on the use of strong cryptography,
the ability of these companies to provide these services may
be compromised by export restrictions, and thus sales of
services potentially reduced.


Latent vs. Actual Demand

   In considering the size of the market for cryptography, it
is important to distinguish between "actual" demand and
"latent" demand.

   +     Actual demand reflects what users spend on products
with encryption capabilities. While the value of "the market
for cryptography" is relatively well defined in the case of
stand-alone security-specific products (it is simply the value
of all of the sales of such products), it is not well defined
when integrated products with encryption capabilities are
involved. The reason is that for such products, there is no
demand for cryptography per se. Rather, users have a need for
products that do useful things; cryptography is a feature
added by designers to protect users from outside threats to
their work, but as a purely defensive capability, cryptography
does not so much add functional value for the user as protect
against reductions in the value that the user sees in the
product. Lotus Notes, for example, would not be a viable
product in the communications software market without its
encryption capabilities, but users buy it for the group
collaboration capabilities that it provides rather than for
the encryption per se.

   +     Latent demand (i.e., inherent demand that users do
not realize or wish to acknowledge but that surfaces when a
product satisfying this demand appears on the market) is even
harder to measure or assess. Recent examples include Internet
usage and faxes; in these instances, the underlying technology
has been available for many years, but only recently have
large numbers of people been able to apply these technologies
for useful purposes. Lower prices and increasing ease of use,
prompted in part by greater demand, have stimulated even more
demand. To the extent that there is a latent demand for
cryptography, the inclusion of cryptographic features into
integrated products might well stimulate a demand for
cryptography that grows out of knowledge and practice, out of
learning by doing.

   Determining the extent of latent demand is complicated
greatly by the fact that latent demand can be converted into
actual demand on a relatively short time scale. Indeed, such
growth curves -- very slow growth in use for a while and then
a sudden explosion of demand -- characterize many critical
mass phenomena: some information technologies (e.g., networks,
faxes, telephones) are valuable only if some critical mass of
people use them. Once that critical mass is reached, other
people begin to use those technologies, and demand takes off.
Linear extrapolations 5 or 10 years into the future based on
5 or 10 years in the past miss this very nonlinear effect.
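
   A deliberately simplified numerical sketch (in Python, with
invented parameters) makes the point concrete: a linear
extrapolation fitted to the early, slow-growth years of an
S-shaped adoption curve badly underestimates where the curve
actually ends up.

    from math import exp

    def logistic(t, ceiling=100.0, midpoint=12.0, rate=0.6):
        # S-curve typical of critical-mass adoption.
        return ceiling / (1.0 + exp(-rate * (t - midpoint)))

    def linear_forecast(history, years_ahead):
        # Extrapolate the average annual increase seen so far.
        slope = (history[-1] - history[0]) / (len(history) - 1)
        return history[-1] + slope * years_ahead

    early_years = [logistic(t) for t in range(6)]   # slow start
    print("linear 10-year forecast:",
          round(linear_forecast(early_years, 10), 1))   # about 4
    print("logistic value 10 years on:",
          round(logistic(15), 1))                       # about 86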

   Of course, it is difficult to predict a surge in demand
before it actually occurs. In the case of cryptography, market
analysts have been predicting significantly higher demand for
many years; today, growth rates are high, but demand for
information security products including cryptography is not
yet ubiquitous.

   Two important considerations bearing directly on demand are
increasing system complexity and the need for
interoperability. Users must be able to count on a high degree
of interoperability in the systems and software they purchase
if they are to operate smoothly across national boundaries (as
described in Chapter 1). Users understand that it is more
difficult to make different products interoperate, even if
they are provided by the same vendor, than to use a single
product. For example, the complexity of a product generally
rises as a function of the number of products with which it
must interoperate, because a new product must interoperate
with already-deployed products. Increased complexity almost
always increases vulnerabilities in the system or network that
connects those products. In addition, more complex products
tend to be more difficult to use and require greater technical
skill to maintain and manage; thus, purchasers tend to shy
away from such products. This reluctance, in turn, dampens
demand, even if the underlying need is still present.
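
   A rough count, offered only to illustrate why complexity
grows so quickly, is the number of distinct pairs of products
that must be made to interoperate (and potentially be tested
against one another):

    def pairwise_interfaces(n: int) -> int:
        # n products that must all interoperate imply n*(n-1)/2
        # distinct pairwise interfaces.
        return n * (n - 1) // 2

    for n in (2, 5, 10, 20):
        print(n, "products ->", pairwise_interfaces(n),
              "pairwise interfaces")   # 1, 10, 45, 190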

   From the supply side, vendors feel considerable pressure
from users to develop interoperable products. But greater
technical skills are needed by vendors to ensure
interoperability among different product versions than to
design a single product that will be used universally, just as
they are for users involved in operation and maintenance of
these products. Requirements for higher degrees of technical
skill translate into smaller talent pools from which vendors
can draw and thus fewer products available that can meet
purchasers' needs for interoperability.

   Problems relating to interoperability and system
complexity, as well as the size of the installed base, have
contributed to the slow pace of demand to date for products
with encryption capabilities.

   Nevertheless, the committee believes it is only a matter of
time until a surge occurs, at the same time acknowledging the
similarity between this prediction and other previous
predictions regarding demand. This belief is based on
projections regarding the growth of networked applications(46)
and the trends discussed in Chapter 1--increasing demand for
all kinds of information technology, increasing geographic
dispersion of businesses across international boundaries,
increasing diversity of parties wishing/needing to communicate
with each other, and increasing diversity in information
technology applications and uses in all activities of a
business. Further, the committee believes that computer users
the world over have approximately the same computing needs as
domestic users, and so domestic trends in computing (including
demand for more information security) will be reflected
abroad, though perhaps later (probably years later but not
decades later).


Market Development

   A third issue in assessing the size of the market for
cryptography is the extent to which judgments should be made
on the basis of today's market conditions (which are known
with a higher certainty) rather than markets that may be at
risk tomorrow (which are known with a much lower degree of
certainty).

   The market for certain types of software tends to develop
in a characteristic manner. In particular, the long-term
success of infrastructure software (i.e., software that
supports fundamental business operations such as operating
systems or groupware) depends strongly on the product's market
timing; once such software is integrated into the
infrastructure of the installing organization, demands for
backward-compatibility make it difficult for the organization
to install any alternative.(47) In other words, an existing
software infrastructure inhibits technological change even if
better software might be available. It is for this reason that
in some software markets, major advantages accrue to the first
provider of a reasonable product.

   These pressures complicate life for government policy
makers who would naturally prefer a more deliberate approach
to policy making, because it is only during a small window of
time that their decisions are relevant--the sooner they act,
the better. The longer they wait, the higher will be the
percentage of companies that have already made their
technology choices, and these companies will face large
changeover costs if policy decisions entail incompatible
alternatives to their currently deployed infrastructure. If
the initial choices of companies involve putting non-U.S.
software in place, U.S. vendors fear that they will have lost
huge future market opportunities.(48)

----------

   (39) Jim Hassert, *Washington Connections*, Software
Publishers Association, Washington, D.C., Chapter 9. Available
on-line at http://www.spa.org.

   (40) Business Software Alliance, *Information and Data
Security: The Encryption Update.* Available on-line from
http://www.bsa.org.

   (41)  U.S. Department of Commerce and National Security
Agency, *A Study of the International Market for Computer
Software with Encryption*, prepared for the Interagency
Working Group on Encryption and Telecommunications Policy,
Office of the Secretary of Commerce, January 11, 1996, p.
III-1. Note, however, that this report does not arrive at this
estimate independently; rather, it cites other estimates made
in the private sector.

   (42)  Of course, it is a matter of speculation what
fraction of the information technology market (on the order of
$193 billion in 1993; see below) might usefully possess
encryption capabilities; good arguments can be made to suggest
that this fraction is very small or very large. A number of
information technology trade organizations have also made
estimates. The Software Publishers Association cited a survey
by the National Computer Security Association that quoted a
figure of $160 million in aggregate known losses in 1993
because of export controls; see "Written Testimony of the
Software Publishers Association to the National Research
Council," Washington, D.C., July 19, 1995. In 1993, the
Business Software Alliance estimated that "approximately $6-9
billion in U.S. company revenues are currently at risk because
of the inability of those companies to be able to sell world
wide generally available software with encryption capabilities
employing DES or other comparable strength algorithms;" see
Testimony of Ray Ozzie, president, Iris Associates, on behalf
of the Business Software Alliance, "The Impact on America's
Software Industry of Current U.S. Government Munitions Export
Controls," before the Economic Policy, Trade and Environment
Subcommittee, House Committee on Foreign Affairs, Washington,
D.C., October 12, 1993. The Computer Systems Policy Project
(CSPP) estimated that in 2000, the potential annual revenue
exposure for U.S. information technology vendors would range
from $3 billion to $6 billion on sales of cryptographic
products, including both hardware and software; CSPP also
estimated $30 billion to $60 billion in potential revenue
exposure on sales of associated computer systems; see The
Computer Systems Policy Project, William F. Hagerty IV, The
Management Advisory Group, *The Growing Need for Cryptography:
The Impact of Export Control Policy on U.S. Competitiveness*,
Study Highlights (viewgraphs), Bethesda, Maryland, December
15, 1995.

   The $193 billion figure is taken from Department of
Commerce, *U.S. Industrial Outlook 1994*, and includes
computers and peripherals ($62.5 billion, p. 26-1), packaged
software ($32.0 billion, p. 27-1), information services ($13.6
billion, p. 25-1), data processing and network services ($46.4
billion, p. 25-1), and systems integration/custom programming
services ($38.7 billion, p. 25-5). Note that this figure does
not include some other industry sectors that could, in
principle, be affected by regulations regarding secure
communications; in 1993, U.S. companies provided
telecommunications services valued at $10.4 billion to foreign
nations (p. 29-1) and shipped $17.5 billion (1987 dollars) in
telephone equipment worldwide (p. 30-3).

   (43)  For example, in a presentation to the committee on
July 19, 1995, the Software Publishers' Association documented
several specific instances in which a U.S. company had lost a
sale of a product involving cryptography to a foreign firm.
These instances included a company that lost one-third of its
total revenues because export controls on DES-based encryption
prevented sales to a foreign firm; a company that could not
sell products with encryption capability to a European company
because that company re-sold products to clients other than
financial institutions; a U.S. company whose European division
estimated at 50 percent the loss of its business among
European financial institutions, defense industries,
telecommunications companies, and government agencies because
of inadequate key sizes; and a U.S. company that lost the sale
of a DES-based system to a foreign company with a U.S.
subsidiary. Software Publishers' Association, "Presentation on
Impacts of Export Control on Encryption before the NRC
National Cryptography Policy Committee," July 19, 1995.

   (44) See for example, Kara Swisher, "Old World, New
Frontier in Cyberspace," *Washington Post*, December 12, 1995,
p. C-1; Victoria Shannon, "U.S. On-Line Services Fall Short on
International Reach," *Washington Post*, April 3, 1995,
Washington Business, p. 20. For more detail on AOL plans, see
Elizabeth Cocoran, "America Online to Offer Access in Europe,"
*Washington Post*, May 19, 1995, p. F-3.

   (45) See for example, U.S. Congress, Office of Technology
Assessment, *U.S. Telecommunications Services in European
Markets*, OTA-TCT-548, U.S. Government Printing Office,
Washington, D.C., August 1993.

   (46) For example, a survey by the International Data
Corporation indicated that the installed base of users for
work-group applications (involving communications among
physically separated users) is expected to grow at a rate of
about 74 percent annually between 1993 and 1998 (a rate that,
compounded, amounts to roughly a sixteenfold increase over the
period). See Ann Palermo and Darby Johnson, Analysts,
International Data Corporation, *Workgroup Applications
Software: Market Review and Forecast, 1993-1998*, Framingham,
Massachusetts, (date).
It is true that a considerable amount of remote collaboration
is done via e-mail without cryptographic protection, but
work-group applications provide much higher degrees of
functionality for collaboration because they are specifically
designed for that purpose. As these applications become more
sophisticated (e.g., as they begin to process large assemblies
of entire documents rather than the short messages for which
e-mail is best suited), the demand for higher degrees of
protection is likely to increase.

   (47) Many products require backward-compatibility for
marketplace acceptance. Demands for backward-compatibility
even affect products intended for operation in a stand-alone
environment -- an institution with 2 million spreadsheet files
is unlikely to be willing to switch to a product that is
incompatible with that existing database unless the product
provides reasonable translation facilities for migrating to
the new product. Network components are even harder to change,
because stations on a network must interoperate. For example,
most corporate networks have servers deployed with
workstations that communicate with those servers. Any change
to the software for the servers must not render it impossible
for those workstations to work smoothly with the upgrade.

   (48) The deployment of Lotus Notes provides a good example.
Lotus marketing data suggests fairly consistently that once
Notes achieves a penetration of about 200 users in a given
company, an explosion of demand follows, and growth occurs
until Notes is deployed company-wide.

____________________________________________________________


       4.3.4 Inhibiting Vendor Responses to User Needs

   In today's marketing environment, volume sales (licensing)
to large corporate or government customers, rather than
purchases by individuals, tend to drive sales of business
software products.(49) Since corporate customers have large
leverage in the marketplace (because one purchasing decision
can result in thousands of product sales to a single
corporation), major software vendors are much more responsive
to the needs of corporate users. Of particular relevance to
the export control debate are three perceptions of corporate
users:

   +     Corporate users do not see that different levels of
encryption strength (as indicated, for example, by the key
length of foreign and domestic versions of a product) provide
differential advantages. Put differently, the market reality
is that users perceive domestic-strength versions as the
standard and liberally exportable versions of cryptography as
weak, rather than seeing liberally exportable versions of
cryptography as the standard and domestic-strength versions as
stronger.

   +     Corporate users weigh all features of a product in
deciding whether or not to buy it. Thus, the absence of a
feature such as strong encryption that is desired but not
easily available because of U.S. export controls counts as a
distinct disadvantage for a U.S. product. Although other
features may help to compensate for this deficiency, the
deficiency may pose enough of a barrier to a product's
acceptance abroad that sales are significantly reduced.

   +     Corporate users see cryptographic strength as an
important parameter in their assessments of the information
security that products offer. It is true that cryptography is
only one dimension of information security, that export
controls do not affect certain approaches to increasing
overall information security, and that vendors often do not
address these other approaches. But cryptography is a visible
aspect of the information security problem, and vendors feel
an obligation to respond to market perceptions even if these
perceptions may not be fully justified by an underlying
technical reality. Moreover, many of the information security
measures that do not involve export controls are more
difficult and costly than cryptography to implement, and so it
is natural for vendors to focus their concerns on export
controls on cryptography.

   U.S. vendors that are unable to respond in a satisfactory
manner to these perceptions have a natural disadvantage in
competing against vendors that are able to respond.


----------

   (49) The Department of Commerce noted that "civil use of
software-based encryption will significantly increase in the
next five years, with corporate customers dominating this new
marketplace." See U.S. Department of Commerce and National
Security Agency, *A Study of the International Market for
Computer Software with Encryption*, prepared for the
Interagency Working Group on Encryption and Telecommunications
Policy, Office of the Secretary of Commerce, January 11, 1996,
p. III-2.


____________________________________________________________


              4.4 THE IMPACT OF EXPORT CONTROLS
      ON U.S. ECONOMIC AND NATIONAL SECURITY INTERESTS

   By affecting U.S. industries abroad that might use
cryptography to protect their information interests and U.S.
vendors of a critical technology (namely, information
technology), export controls have a number of potentially
negative effects on national security that policy makers must
weigh against the positive effects of reducing the use of
cryptography by hostile parties.


        4.4.1 Direct Economic Harm to U.S. Businesses

   While acknowledging economic benefits to U.S. business from
signals intelligence (as described in Chapter 3), the
committee notes that protection of the information interests
of U.S. industries is also a dimension of national security,
especially when the threats emanate from foreign sources.

   If the potential value of proprietary information is
factored into the debate over export controls, it dominates
all other figures of merit. The Computer Systems Policy
Project placed a figure of $280 billion to $560 billion on the
value of future revenue opportunities, by 2000, resulting from
electronic distribution and commerce and from opportunities to
reengineer business processes.(50) Opponents of
export controls on cryptography argue that if electronic
channels and information systems are perceived to be
vulnerable, businesses may well be discouraged from exploiting
these opportunities, thereby placing enormous potential
revenues at risk.

   On the other hand, it is essentially impossible to
ascertain with any degree of confidence what fraction of
proprietary information would be at risk in any practical
sense if businesses did move to exploit these opportunities.
Current estimates of industrial and economic espionage provide
little guidance. The most authoritative publication on the
subject to date, the *Annual Report to Congress on Foreign
Economic Collection and Industrial Espionage*,(51) noted that

   [i]n today's world in which a country's power and stature
   are often measured by its economic/industrial capability,
   foreign government ministries--such as those dealing with
   finance and trade--and major industrial sectors are
   increasingly looked upon to play a more prominent role in
   their respective country's collection efforts.... An
   economic competitor steals a US company's proprietary
   business information or government trade strategies, [and]
   foreign companies and commercially oriented government
   ministries are the main beneficiaries of US economic
   information. The aggregate losses that can mount as a
   result of such efforts can reach billions of dollars per
   year, constituting a serious national security concern.

   The report went on to say that "[t]here is no formal
mechanism for determining the full qualitative and
quantitative scope and impact of the loss of this targeted
information. Industry victims have reported the loss of
hundreds of millions of dollars, lost jobs, and lost market
share." Thus, even this report, backed by all of the
counterintelligence efforts of the U.S. government, is unable
to render a definitive estimate to within an order of
magnitude. Of course, it may well be that these estimates of
loss are low, because companies are reluctant to publicize
occurrences of foreign economic and industrial espionage as
such publicity can adversely affect stock values, customers'
confidence, and ultimately competitiveness and market share,
or because clandestine theft of information may not be
detected. Furthermore, because all business trends point to
greater volumes of electronically stored and communicated
information in the future, it is clear that the potential for
information compromises will grow--the value of information
that could be compromised through electronic channels is only
going to increase.

----------

   (50) William F. Hagerty IV, The Management Advisory Group,
Computer Systems Policy Project, *The Growing Need for
Cryptography: The Impact of Export Control Policy on U.S.
Competitiveness*, Study Highlights (viewgraphs), Bethesda,
Maryland, December 15, 1995.

   (51)  National Counterintelligence Center, *Annual Report
to Congress on Foreign Economic Collection and Industrial
Espionage*, Washington, D.C., July 1995.

____________________________________________________________


               4.4.2 Damage to U.S. Leadership
                  in Information Technology

   The strength of the U.S. information technology industry
has been taken as a given for the past few decades. But as
knowledge and capital essential to the creation of a strong
information technology industry become more available around
the world, such strength can no longer be taken for
granted.(52) If and when foreign products become widely
deployed and well integrated into the computing and
communications infrastructure of foreign nations, even better
versions of U.S. products will be unable to achieve
significant market penetration. One example of such a
phenomenon may be the growing interest in the United States in
personal communications systems based on GSM, the European
standard for digital cellular voice communications. Further,
as the example of Microsoft vis-a-vis IBM in the 1980s
demonstrated, industry dominance once lost is quite difficult
to recover in rapidly changing fields.

   The development of foreign competitors in the information
technology industry could have a number of disadvantageous
consequences from the standpoint of U.S. national security
interests:

   +     Foreign vendors, by assumption, will be more
responsive to their own national governments than to the U.S.
government. To the extent that foreign governments pursue
objectives involving cryptography that are different from
those of the United States, U.S. interests may be adversely
affected. Specifically, foreign vendors could be influenced by
their governments to offer for sale to U.S. firms products
with weak or poorly implemented cryptography. If these vendors
were to gain significant market share, the information
security of U.S. firms could be adversely affected.
Furthermore, the United States is likely to have less
influence and control over shipments of products with
encryption capabilities between foreign nations than it has
over similar U.S. products that might be shipped abroad;
indeed, many foreign nations are perfectly willing to ship
products (e.g., missile parts, nuclear reactor technology) to
certain nations in contravention of U.S. or even their own
interests. In the long run, the United States may have even
less control over the products with encryption capabilities
that wind up on the market than it would have if it
promulgated a more moderate export control regime.

   +     Detailed information about the workings of foreign
products with encryption capabilities is much less likely to
be available to the U.S. government than comparable
information about similar U.S. products that are exported.
Indeed, as part of the export control administration process,
U.S. products with encryption capabilities intended for export
are examined thoroughly by the U.S. government; as a result,
large amounts of information about U.S. products with
encryption capabilities are available to it.(53)

   Export controls on cryptography are not the only factor
influencing the future position of U.S. information technology
vendors in the world market. Yet, the committee believes that
these controls do pose a risk to their future position that
cannot be ignored, and that relaxation of controls will help
to ensure that U.S. vendors are able to compete with foreign
vendors on a more equal footing.

----------

   (52)  Obviously, it is impossible to predict with certainty
whether export controls will stimulate the growth of
significant foreign competition for U.S. information
technology vendors. But the historical evidence suggests some
reason for concern. For example, a 1991 report (National
Research Council, *Finding Common Ground: U.S. Export Controls
in a Changed Global Environment*, National Academy Press,
1991) found that "unilateral embargoes on exports [of
technologies for commercial aircraft and jet engines] to
numerous countries not only make sales impossible but actually
encourage foreign competitors to develop relationships with
the airlines of the embargoed countries. By the time the U.S.
controls are lifted, those foreign competitors may have
established a competitive advantage" (page 22). The same
report also found that for computer technology, "marginal
supplier disadvantages can lead to significant losses in
market position, and it is just such marginal disadvantages
that can be introduced by export controls" (page 23). An
earlier study (Charles Ferguson, "High Technology Product Life
Cycles, Export Controls, and International Markets," in
*Working Papers* of the National Research Council report
*Balancing the National Interest, U.S. National Security
Export Controls and Global Economic Competition*, National
Academy Press, 1987), pointed out that the emergence of strong
foreign competition in a number of high-technology areas
appeared in close temporal proximity to the enforcement of
strong export controls in these areas for U.S. vendors. While
the correlation does not prove that export controls
necessarily influenced or stimulated the growth of foreign
competition, the history suggests that they may have had some
causal relationship. In the financial arena (not subject to
export controls), U.S. financial controls associated with the
Trading-with-the-Enemy Act may have led to the rise of the
Eurodollar market, a set of foreign financial institutions,
markets, and instruments that eroded the monopoly held on
dollar-denominated instruments and dollar-dominated
institutions by U.S. firms.

   The likelihood of foreign competition being stimulated for
cryptography may be larger than suggested by some of these
examples, because at least in the software domain, product
development and distribution are less capital-intensive than
in traditional manufacturing industries; lower capital
intensity would mean that competitors would be more likely to
emerge.

   Finally, while it is true that some foreign nations also
impose export controls on cryptography, those controls tend to
be less stringent than those of the United States, as discussed
in Appendix G. In particular, it is more difficult to export
encryption from the United States to the United Kingdom than
the reverse, and the U.S. market is an important market for
foreign vendors. Further, it takes only one nation with weak
or nonexistent controls to spawn a competitor in an industry
such as software.

   (53) For example, U.S. vendors are more likely than foreign
vendors to reveal source code of a program to the U.S.
government (for purposes of obtaining export licenses). While
it is true that the object code of a software product can be
decompiled, decompiled object code is always much more
difficult to understand than the original source code that
corresponds to it.

_____________________________________________________________


         4.5 THE MISMATCH BETWEEN THE PERCEPTIONS OF
      GOVERNMENT/NATIONAL SECURITY AND THOSE OF VENDORS


   As the committee proceeded in its study, it observed what
can only be called a disconnect between the perceptions of the
national security authorities that administer the export
control regulations on cryptography and the vendors affected
by those regulations. This disconnect was apparent in a number
of
areas:

   +     National security authorities asserted that export
controls did not injure the interests of U.S. vendors in the
foreign sales of products with encryption capabilities. U.S.
vendors asserted that export controls had a significant
negative effect on their foreign sales.

   +     National security authorities asserted that nearly
all export license applications for a product with encryption
capabilities are approved. Vendors told the committee that
they refrained from submitting products for approval because
they had been told on the basis of preliminary discussions
that their products would not be approved for export.

   +     National security authorities presented data showing
that the turnaround time for license decisions had been
dramatically shortened (to a matter of days or a few weeks at
most). Vendors noted that these data took into account only
the time from the date of formal submission of an application
to the date of decision, and did not take into account the
much greater length of time required to negotiate product
changes that would be necessary to receive approval. (See
Section 4.3.2 for more discussion.)

   +     National security authorities asserted that they
wished to promote good information security for U.S.
companies, pointing out the current practice described in
Section 4.1.2 that presumes the granting of USML licenses for
stronger cryptography to U.S.-controlled companies and banking
and financial institutions. Vendors pointed to actions taken
by these authorities to weaken the cryptographic security
available for use abroad, even in business ventures in which
U.S. firms had substantial interests. Potential users often
told the committee that even under presumptive approval,
licenses were not forthcoming, and that for practical
purposes, these noncodified categories were not useful.

   +     National security authorities asserted that they took
into account foreign competition and the supply of products
with encryption capabilities when making decisions on export
licenses for U.S. products with encryption capabilities.
Vendors repeatedly pointed to a substantial supply of foreign
products with encryption capabilities.

   +     National security authorities asserted that they
wished to maintain the worldwide strength and position of the
U.S. information technology industry. Vendors argued that when
they are prevented from exploiting their strengths--such as
being the first to develop integrated products with strong
encryption capabilities -- their advantages are in fact being
eroded.

   The committee believes that to some extent, these
differences can be explained as the result of rhetoric by
parties intending to score points in a political debate. But
the differences are not merely superficial; they reflect
significantly different institutional perspectives. For
example, when national security authorities "take into account
foreign supplies of cryptography," they focus naturally on
what is available at the time the decision is being made. On
the other hand, vendors are naturally concerned about
incorporating features that will give their products a
competitive edge, even if no exactly comparable foreign
products with cryptography are available at the moment. Thus,
different parties focus on different areas of
concern--national security authorities on the capabilities
available today, and vendors on the capabilities that might
well be available tomorrow.

   NSA perceptions of vendors and users of cryptography may
well be clouded by the unwillingness of vendors and users to
speak publicly about the full extent of their unhappiness with
the current state of affairs. National security authorities
asserted that their working relationships with vendors of
products with encryption capabilities are relatively
harmonious. Vendors contended that since they are effectively
at the mercy of the export control regulators, they have
considerable incentive to suppress any public expression of
dissatisfaction with the current process. A lack (or small
degree) of vendor outcry against the cryptography export
control regime cannot be taken as vendor support for it. More
specifically, the committee received input from a number of
private firms on the explicit condition of confidentiality.
For example:

   +     Companies with interests in cryptography affected by
export control were reluctant to express fully their
dissatisfaction with the current rules governing export of
products with encryption capabilities or how these rules were
actually implemented in practice. They were concerned that any
explicit connection between critical comments and their
company might result in unfavorable treatment of a future
application for an export license for one of their products.

   + Companies that had significant dealings with the
Department of Defense were reluctant to express fully their
unhappiness with policy that strongly promoted classified
encryption algorithms and government-controlled key-escrow
schemes. These companies were concerned that expressing their
unhappiness fully might result in unfavorable treatment in
competing for future DOD business.

   Many companies have expressed dissatisfaction publicly,
although a very small number of firms did express to the
committee their relative comfort with the way in which the
current export control regime is managed. The committee did
not conduct a systematic survey of all firms affected by
export regulations, and it is impossible to infer the position
of a company that has not provided input on the matter.(54)

----------

   (54) The Department of Commerce study is the most
systematic attempt to date to solicit vendors' input on how
they have been affected by export controls, and the
solicitation received a much smaller response than expected.
See U.S. Department of Commerce and National Security Agency,
*A Study of the International Market for Computer Software
with Encryption*, prepared for the Interagency Working Group
on Encryption and Telecommunications Policy, Office of the
Secretary of Commerce, January 11, 1996.

____________________________________________________________


                4.6 EXPORT OF TECHNICAL DATA


   The rules regarding "technical data" are particularly
difficult to understand. A cryptographic algorithm (if
described in a manner that is not machine-executable) is
counted as technical data, whereas the same algorithm if
described in machine-readable form (i.e., source or object
code) counts as a product. Legally, the ITAR regulate products
with encryption capabilities differently from technical data
related to cryptography, although the differences are
relatively small. For example, technical data
related to cryptography enjoys an explicit exemption when
distributed to U.S.-controlled foreign companies, whereas
products with encryption capabilities are in principle subject
to a case-by-case review in such instances (although in
practice, licenses for products with encryption capabilities
under such circumstances are routinely granted).

   Private citizens, academic institutions, and vendors are
often unclear about the legality of actions such as:

   +     Discussing cryptography with a foreign citizen in the
room;

   +     Giving away software with encryption capabilities
over the Internet (see Section 4.8);

   +     Shipping products with encryption capabilities to a
foreign company within the United States that is controlled
but not owned by a U.S. company;

   +     Selling a U.S. company that makes products with
strong encryption capabilities to a foreign company;

   +     Selling products with encryption capabilities to
foreign citizens on U.S. soil;

   +     Teaching a course on cryptography that involves
foreign graduate students;

   +     Allowing foreign citizens residing in the United
States to work on the source code of a product that uses
embedded cryptography.(55)

   Box 4.11 provides excerpts from the only document known to
the committee that describes the U.S. government explanation
of the regulations on technical data related to cryptography.
In practice, these and other similar issues regarding
technical data do not generally pose problems because these
laws are for the most part difficult to enforce and in
fact are not generally enforced. Nevertheless, the vagueness
and broad nature of the regulations may well put people in
jeopardy unknowingly.

----------

   (55) For example, one vendor argues that because foreign
citizens hired by U.S. companies bring noncontrolled knowledge
back to their home countries anyway, the export control
regulations on technical data make little sense as a technique
for limiting the spread of knowledge. In addition, other
vendors note that in practice the export control regulations
on technical data have a much more severe impact on the
employees that they may hire than on academia, which is
protected at least to some extent by presumptions of academic
freedom.

   (56) A suit filed in February 1995 seeks to bar the
government from restricting publication of cryptographic
documents and software through the use of the export control
laws. The plaintiff in the suit is Dan Bernstein, a graduate
student in mathematics at the University of California at
Berkeley. Bernstein developed an encryption algorithm that he
wishes to publish and to implement in a computer program
intended for distribution, and he wants to discuss the
algorithm and program at open, public meetings. Under the
current export control laws, any individual or company that
exports unlicensed encryption software may be in violation of
the export control laws that forbid the unlicensed export of
defense articles, and any individual that discusses the
mathematics of cryptographic algorithms may be in violation of
the export control laws that forbid the unlicensed export of
"technical data." The lawsuit argues that the export control
scheme as applied to encryption software is an "impermissible
prior restraint on speech, in violation of the First
Amendment" and that the current export control laws are vague
and overbroad in denying people the right to speak about and
publish information about cryptography freely. A decision of
April 15, 1996, by Judge Marilyn Patel of the U.S. District
Court for the Northern District of California denied the
government's motion to dismiss this suit and found that, for
the purposes of First Amendment analysis, source code should be
treated as speech. The outcome of this suit is unknown as of
the time of this writing (spring 1996). The full text of this
decision and
other related documents can be found at
http://www.eff.org/pub/Legal/Cases/Bernstein_v_DoS/Legal/.

   The constitutionality of export controls on technical data
has not been determined by the U.S. Supreme Court. A ruling by
the U.S. Ninth Circuit Court of Appeals held that the ITAR,
when construed as "prohibiting only the exportation of
technical data significantly and directly related to specific
articles on the Munitions List, do not interfere with
constitutionally protected speech, are not overbroad and the
licensing provisions of the Act are not an unconstitutional
prior restraint on speech." (See 579 F.2d 516, U.S. vs. Edler,
United States Court of Appeals, Ninth Circuit, July 31, 1978.)
Another suit filed by Philip Karn directly challenging the
constitutionality of the ITAR was dismissed by the U.S.
District Court for the District of Columbia on March 22, 1996.
(The issue at hand was the fact that Karn had been denied CCL
jurisdiction for a set of floppy diskettes containing source
code for cryptographic confidentiality identical to that
contained in Schneier's book (which had received CCL
jurisdiction). See http://www.qualcomm.com/people/
pkarn/export/index.html for the running story (Karn is
appealing this decision); this Web page also contains the
District Court's opinion on this lawsuit.) Some scholars argue
to the contrary that export controls on technical data may
indeed present First Amendment problems, especially if these
controls are construed in such a way that they inhibit
academic discussions of cryptography with foreign nationals or
prevent academic conferences on cryptography held in the
United States from inviting foreign nationals. See, for
example, Allen M. Shinn, Jr., "First Amendment and Export
Laws: Free Speech on Scientific and Technical Matters," *The
George Washington Law Review*, January 1990, pp. 368-403, and
Kenneth J. Pierce, "Public Cryptography, Arms Export Controls,
and the First Amendment: A Need for Legislation," *Cornell
International Law Journal*, Volume 17(19), pp. 197-237.

____________________________________________________________


              4.7 FOREIGN POLICY CONSIDERATIONS


   A common perception within the vendor community is that the
National Security Agency is the sole "power behind the scenes"
for enforcing the export control regime for cryptography.
While NSA is indeed responsible for making judgments about the
national security impact of exporting products with encryption
capabilities, it is by no means the only player in the export
license application process.

   The Department of State plays a role in the export control
process that is quite important. For example, makers of
foreign policy in the U.S. government use economic sanctions
as a tool for expressing U.S. concern and displeasure with the
actions of other nations; such sanctions most often involve
trade embargoes of various types. Violations of human rights
by a particular nation, for example, represent a common issue
that can trigger a move for sanctions. Such sanctions are
sometimes based on presidential determinations (e.g., that the
human rights record of country X is not acceptable to the
United States) undertaken in accordance with law; in other
cases, sanctions against specific nations are determined
directly by congressional legislation; in still other cases,
sanctions are based entirely on the discretionary authority of
the President.

   The imposition of sanctions is often the result of
congressional action that drastically limits the discretionary
authority of the State Department. In such a context, U.S.
munitions or articles of war destined for particular offending
nations (or for companies in such nations) are the most
politically sensitive, and in practice the items on the USML
are the ones most likely to be denied to the offending
nations. In all such cases, the State Department must
determine whether a particular item on the USML should or
should not qualify for a USML license. A specific example of
such an action given to the committee in testimony involved
the export of cryptography by a U.S. bank for use in a branch
located in the People's Republic of China. Because of China's
human rights record, the Department of State delayed the
export, and the contract was lost to a Swiss firm. The sale of
cryptographic tools that are intended to protect the interests
of a U.S. company operating in a foreign nation was subject to
a foreign policy stance that regarded such a sale as
equivalent to supplying munitions to that nation.

   Thus, even when NSA has been willing to grant an export
license for a given cryptography product, the State Department
has sometimes denied a license because cryptography is on the
USML. In such cases, NSA takes the blame for a negative
decision even when it played no role in that decision.

   Critics of the present export control regime have made the
argument that cryptography, as an item on the USML that is
truly dual-use, should not necessarily be included in such
sanctions. Such an argument has some intellectual merit, but
under current regulations it is impossible to separate
cryptography from the other items on the USML.


              4.8 TECHNOLOGY-POLICY MISMATCHES


   Two cases are often cited in the cryptography community as
examples of the mismatch between the current export control
regime and the current state of cryptographic technology (Box
4.12). Moreover, they are often used as evidence that the
government is harassing innocent law-abiding citizens.

   Taken by themselves and viewed from the outside, both of
the cases outlined in Box 4.12 suggest an approach to national
security with evident weaknesses. In the first instance,
accepting the premise that programs for cryptography cannot
appear on the Internet because a foreigner might download them
seems to challenge directly the use of the Internet as a forum
for exchanging information freely even within the United
States. Under such logic (claim the critics), international
telephone calls would also have to be shut down because a U.S.
person might discuss cryptography with a foreign national on
the telephone. In the second instance, the information
contained in the book (exportable) is identical to that on the
disk (not exportable). Since it is the information about
cryptography that is technically at issue (the export control
regulations make no mention of the medium in which that
information is represented), it is hard to see why one would
be exportable and the other not.

   On the other hand, taking the basic assumptions of the
national security perspective as a given, the decisions have
a certain logic that amounts to more than the logic of
selective prosecution or enforcement.

   +    In the case of Zimmermann, the real national security
issue is not the program itself, but rather the fact that a
significant PGP user base may be developing. Two copies of a
good encryption program distributed abroad pose no plausible
threat to national security. But 20 million copies might well
pose a threat. However, the export control regulations as
written do not mention potential or actual size of the user
base, and so the only remaining leverage is the broad language
that brings cryptography under the export control laws.

   +     In the case of Schneier, the real national security
issue relates to the nature of any scheme intended to deny
capabilities to an adversary. Typing the book's source code
into the computer is an additional step that an adversary must
take to implement a cryptography program, and a step at which
an adversary could make additional errors. No approach to
denial can depend on a single "silver bullet"; instead, denial
rests on the erection of multiple barriers, all of which taken
together are expected to result in at least a partial denial
of a certain capability. Moreover, if one begins from the
premise that export controls on software encryption represent
appropriate national policy, it is clear that allowing the
export of the source code to Schneier's book would set a
precedent that would make it very difficult to deny permission
for the export of other similar software products with
encryption capabilities. Finally, the decision is consistent
with a history of commodity jurisdiction decisions that
generally maintains USML controls on the source code of a
product whose object code implementation of confidentiality
has been granted commodity jurisdiction to the CCL.

   These comments are not intended to excoriate or defend the
national security analysis of these cases. But the controversy
over these cases does suggest quite strongly that the
traditional national security paradigm of export controls on
cryptography (one that is biased toward denial rather than
approval) is stretched greatly by current technology. Put
differently, when the export control regime is pushed to an
extreme, it appears to be manifestly ridiculous.


                          4.9 RECAP


   Current export controls on products with encryption
capabilities are a compromise between (1) the needs of
national security to conduct signals intelligence and (2) the
needs of U.S. and foreign businesses operating abroad to
protect information and the needs of U.S. information
technology vendors to remain competitive in markets involving
products with encryption capabilities that might meet these
needs. These controls have helped to delay the spread of
strong cryptographic capabilities and use of those
capabilities throughout the world, to impede the development
of standards for cryptography that would facilitate such a
spread, and to give the U.S. government a tool for monitoring
and influencing the commercial development of cryptography.
Export controls have clearly been effective in limiting the
foreign availability of products with strong encryption
capabilities made by U.S. manufacturers, although enforcement
of export controls on certain products with encryption
capabilities appears to have created many public relations
difficulties for the U.S. government, and circumventions of
the current regulations appear possible. The dollar cost of
limiting the availability of cryptography abroad is hard to
estimate with any kind of confidence, since even the
definition of what counts as a cost is quite fuzzy. At the
same time, a floor of a few hundred million dollars per year
for the market affected by export controls on encryption seems
plausible, and all indications are that this figure will only
grow in the future.

   A second consideration is the possibility that export
controls on products with encryption capabilities may well
have a negative impact on U.S. national security interests by
stimulating the growth of important foreign competitors over
which the U.S. government has less influence, and possibly by
damaging U.S. competitive advantages in the use and
development of information technology. In addition, the export
control regime is clouded by uncertainty from the vendor
standpoint, and there is a profound mismatch between the
perceptions of government/national security and those of
vendors on the impact of the export control regime. Moreover,
even when a given product with encryption capabilities may be
acceptable for export on national security grounds,
considerations other than national security may play a role in
licensing decisions.

   Partly in response to expressed concerns about export
controls, the export regime has been gradually loosened since
1983. This relaxation raises the obvious question of how much
farther and in what directions such loosening could go without
significant damage to national security interests. This
subject is addressed in Chapter 7.

____________________________________________________________

                BOX 4.1 Enforcing Compliance
                   with End-Use Agreements


   In general, a U.S. Munitions List (USML) license is granted
to a U.S. exporter for the shipping of a product, technical
data, or service covered by the USML to a particular foreign
recipient for a set of specified end uses and subject to a
number of conditions (e.g., restrictions on reexport to
another nation, nontransfer to a third party). The full range
of ITAR sanctions is available against the U.S. exporter and
the foreign recipient outside the United States.

   The ITAR specify that as a condition of receiving a USML
license, the U.S. exporter must include in the contract with
the foreign recipient language that binds the recipient to
abide by all appropriate end-use restrictions. Furthermore,
the U.S. exporter that does not take reasonable steps to
enforce the contract is subject to ITAR criminal and civil
sanctions. But how can end-use restrictions be enforced for a
foreign recipient?

   A number of sanctions are available to enforce the
compliance of foreign recipients of USML items exported from
the United States. The primary sanctions available are the
criminal and civil liabilities established by the Arms Export
Control Act (AECA); the foreign recipient can face civil
and/or criminal charges in U.S. federal courts for violating
the AECA. Although different U.S. courts have different views
on extraterritoriality claims asserted for U.S. law, a
criminal conviction or a successful civil lawsuit could result
in the imposition of criminal penalties on individuals
involved and/or seizure of any U.S. assets of the foreign
recipient. (When there are no U.S. assets, recovering fines or
damages can be highly problematic, although some international
agreements and treaties provide for cooperation in such
cases.) Whether an individual could be forced to return to the
United States for incarceration would depend on the existence
of an appropriate extradition treaty between the United States
and the foreign nation to whose jurisdiction the individual is
subject.

   A second avenue of enforcement is that the foreign
recipient found to be in violation can be denied all further
exports from the United States. In addition, the foreign
violator can be denied permission to compete for contracts
with the U.S. government. From time to time, proposals are
made to apply sanctions against violators that would deny
privileges for them to export products to the United States,
though such proposals often create political controversy.

   A third mechanism of enforcement may proceed through
diplomatic channels. Depending on the nation to whose
jurisdiction the foreign recipient is subject, the U.S.
government may well approach the government of that nation to
seek its assistance in persuading or forcing the recipient to
abide by the relevant end-use restrictions.

   A fourth mechanism of enforcement is the sales contract
between the U.S. exporter and the foreign recipient, which
provides a mechanism for civil action against the foreign
recipient. A foreign buyer who violates the end-use
restrictions is in breach of contract with the U.S. exporter,
who may then sue for damages incurred by the U.S. company.
Depending on the language of the contract, the suit may be
carried out in U.S. or foreign courts; alternatively, the
firms may submit to binding arbitration.

   The operation of these enforcement mechanisms can be
cumbersome, uncertain, and slow. But they exist, and they are
used. Thus, while some analysts believe that they do not
provide sufficient protection for U.S. national security
interests, others defend them as a reasonable but not perfect
attempt at defending those interests.

____________________________________________________________

               BOX 4.2 Licensing Relaxations
              on Cryptography: A Short History


   Prior to 1983, all cryptography exports required an
individual license from the State Department. Since then, a
number of
changes have been proposed and mostly implemented.


Year    Change
_____________________________________________________________

1983    Distribution licenses established allowing exports to
        multiple users under a single license

1987    Nonconfidentiality products moved to Department of
        Commerce (DOC) on a case-by-case basis

1990    ITAR amended -- all nonconfidentiality products under
        DOC jurisdiction

1990    Mass-market general-purpose software with encryption
        for confidentiality moved to DOC on case-by-case basis

1992    Software Publishers Association agreement providing
        for 40-bit RC2/RC4-based products under DOC
        jurisdiction

1993    Mass-market hardware products with encryption
        capabilities moved to DOC on case-by-case basis

1994    Reforms to expedite license processing at Department
        of State

1995    Proposal to move to DOC software products with 64-bit
        cryptography for confidentiality with "properly
        escrowed" keys

1996    "Personal use" exemption finalized
__________

SOURCE: National Security Agency.

____________________________________________________________

           BOX 4.3 Important Differences Between
                the U.S. Munitions List and
                  the Commerce Control List

____________________________________________________________

For Items on the U.S.        For Items on the Commerce
Munitions List (USML):       Control List (CCL):
____________________________________________________________

Department of State has      Department of Commerce may
broad leeway to take         limit exports only to the
national security            extent that they would make "a
considerations into          significant contribution to the
account in licensing         military potential of any other
decisions; indeed, national  country which would prove
security and foreign         detrimental to the national
policy considerations        security of the United States."
are the driving force        or "where necessary to further
behind the Arms Export       significantly the foreign policy
Control Act.                 of the United States."
                             The history of the Export
                             Administration Act strongly
                             suggests that its national
                             security purpose is to deny dual-
                             use items to Communist Bloc
                             nations, nations of concern with
                             respect to the proliferation of
                             weapons of mass destruction, and
                             other rogue nations.

Items are included on the    Performance parameters rather
USML if the item is          than broad categories define
"inherently military in      included items.
character"; the end use
is irrelevant in such
a determination. Broad
categories of product are
included.

Decisions about export can   Decisions about export must be
take as long as necessary.   completed within 120 days.

Export licenses can be       Export licenses can be denied
denied on very general       only on very specific
grounds (e.g., the export    grounds (e.g., high
would be against the U.S.    likelihood of diversion to
national interest).          proscribed nations).

Individually validated       General licenses are often
licenses are generally       issued, although general
required, although           licenses do not convey
distribution and bulk        blanket authority for export
licenses are possible        (see Note 2 below).
(see Note 1 below).

Prior government approval    Prior government approval is
is needed for export.        generally not needed for export.

Licensing decisions are not  Licensing decisions are subject
subject to judicial review.  to judicial review by a federal
                             judge or an administrative law
                             judge.

Foreign availability may     Foreign availability of items
or may not be a              that are substantially
consideration in granting    equivalent is, by law,
a license at the discretion  a consideration in a licensing
of the State Department.     decision.

Items included on the        Items included on the CCL must
USML are not subject         be reviewed periodically.
to periodic review.

A Shipper's Export           An SED may be required, unless
Declaration (SED)            exemption from the requirement
is required in all           is granted under the Export
instances.                   Administration Regulations.

____________________________________________________________

   Note 1: Bulk licenses authorize multiple shipments without
requiring individual approval. Distribution licenses authorize
multiple shipments to a foreign distributor. In each case,
record-keeping requirements are imposed on the vendor. In
practice, a distribution license shifts the burden of export
restrictions from vendor to distributor. Under a distribution
license, enforcement of restrictions on end use and on
destination nations and post-shipment record-keeping
requirements are the responsibility of the distributor;
vendors need not seek an individual license for each specific
shipment.

   Note 2: Even if an item is controlled by the CCL, U.S.
exporters are not allowed to ship such items if the exporter
knows that they will be used directly in the production of
weapons of mass destruction or ballistic missiles by a certain
group of nations. Moreover, U.S. exports from the CCL are
prohibited entirely to companies and individuals on a list of
"Specially Designated Nationals" designated as agents of Cuba,
Libya, Iraq, North Korea, or Yugoslavia or to a list of
companies and individuals on the Bureau of Export
Administration's Table of Denial Orders (including some
located in the United States and Europe).

____________________________________________________________

               BOX 4.4 Categorical Exceptions
           on the USML for Products Incorporating
            Cryptography and Informal Practices
                     Governing Licensing


                   Categorical Exemptions

The ITAR provide for a number of categorical exemptions,
including:

   +     Mass-market software products that use 40-bit key
lengths with the RC2 or RC4 algorithm for confidentiality.
(See Note 1 below.)

   +    Products with encryption capabilities for
confidentiality that are specifically intended for use only in
banking or money transactions. Products in this category may
have encryption of arbitrary strength.

   +    Products that are limited in cryptographic
functionality to providing capabilities for user
authentication, access control, and data integrity.

   Products in these categories are automatically granted
commodity jurisdiction to the Commerce Control List (CCL).


               Informal Noncodified Exemptions

   The current export control regime provides for an
individual case-by-case review of USML licensing applications
for products that do not fall under the jurisdiction of the
CCL. Under current practice, certain categories of firms will
generally be granted a USML license through the individual
review process to acquire and export, for their own use,
products with encryption capabilities stronger than that
provided by 40-bit RC2/RC4 encryption (see Note 2 below):

   +    A U.S.-controlled firm (i.e., a U.S. firm operating
abroad, a U.S.-controlled foreign firm, or a foreign
subsidiary of a U.S. firm);

   +    Banks and financial institutions (including stock
brokerages and insurance companies), whether U.S.-controlled,
U.S.-owned, or foreign-owned, if the products involved are
intended for use in internal communications and communications
with other banks even if these communications are not limited
strictly to banking or money transactions.

----------

   Note 1: The RC2 and RC4 algorithms are symmetric-key
encryption algorithms developed by RSA Data Security Inc.
(RSADSI). They are both proprietary algorithms, and
manufacturers of products using these algorithms must enter
into a licensing arrangement with RSADSI. RC2 and RC4 are also
trademarks owned by RSADSI, although both algorithms have
appeared on the Internet. A product with capabilities for
confidentiality will be automatically granted commodity
jurisdiction to the CCL if it meets a certain set of
requirements, the most important of which are the following:

   a. The software includes encryption for data
confidentiality and uses the RC4 and/or RC2 algorithms with a
key space of 40 bits.

   b. If both RC4 and RC2 are used in the same software, their
functionality must be separate; that is, no data can be
operated on by both routines.

   c. The software must not allow the alteration of the data
encryption mechanism and its associated key spaces by the user
or by any other program.

   d. The key exchange used in the data encryption must be
based on either a public-key algorithm with a key space less
than or equal to a 512-bit modulus and/or a symmetric
algorithm with a key space less than or equal to 64 bits.

   e. The software must not allow the alteration of the key
management mechanism and its associated key space by the user
or any other program.

   To ensure that the software has properly implemented the
approved encryption algorithm(s), the State Department
requires that the product pass a "vector test," in which the
vendor receives test data (the vector) and a random key from
the State Department, encrypts the vector with the product
using the key provided, and returns the result to the State
Department; if the product-computed result is identical to the
known correct answer, the product automatically qualifies for
jurisdiction under the CCL.
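
   To make the mechanics of such a vector test concrete, the
following Python sketch shows roughly what the comparison
amounts to. It is illustrative only: the actual test vectors,
key formats, and submission procedures are administered by the
State Department and are not specified here, and the RC4
routine shown is the version of the algorithm that has
circulated publicly (RC4 itself remains proprietary to RSADSI):

    def rc4(key, data):
        # Key-scheduling algorithm (KSA)
        S = list(range(256))
        j = 0
        for i in range(256):
            j = (j + S[i] + key[i % len(key)]) % 256
            S[i], S[j] = S[j], S[i]
        # Keystream generation (PRGA): XOR the keystream into data
        out = bytearray()
        i = j = 0
        for byte in data:
            i = (i + 1) % 256
            j = (j + S[i]) % 256
            S[i], S[j] = S[j], S[i]
            out.append(byte ^ S[(S[i] + S[j]) % 256])
        return bytes(out)

    def passes_vector_test(vector, key40, expected):
        # Hypothetical check: the product's encryption of the test
        # vector under the supplied 40-bit (5-byte) key must match
        # the known correct answer held by the licensing office.
        assert len(key40) == 5
        return rc4(key40, vector) == expected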

   Note that the specific technical requirements described in
this footnote are not contained in the *Federal Register*;
rather, they were described in a State Department document
whose change is not subject to an official procedure for
public comment. (These conditions were first published in
"Defense Trade News," Volume 3(4), October 1992, pages 11-15.
"Defense Trade News" is a newsletter published by the Office
of Defense Trade Controls at the Department of State.)

   Note 2: How much stronger than 40-bit RC2/RC4 is
unspecified. Products incorporating the 56-bit DES algorithm
are often approved for these informal exemptions, and at times
even products using larger key sizes have been approved. But
the key size is not unlimited, as may be the case under the
explicit categorical exemptions specified in the ITAR.

____________________________________________________________

              BOX 4.5 Successful Challenges to
                      40-bit Encryption


   In the summer of 1995, a message encrypted with the 40-bit
RC4 algorithm was successfully decrypted without prior
knowledge of the key by Damien Doligez of the INRIA
organization in France. The message in question was a record
of an actual submission of form data that was sent to
Netscape's electronic shop order form in "secure" mode
(including a fictitious name and address). The challenge was
posed to break the encryption and recover the name and address
information entered in the form and sent securely to
Netscape. Breaking the encryption was accomplished by a
brute-force search on a network of about 120 workstations and
a few parallel computers at INRIA, Ecole Polytechnique, and
ENS. The key was found after scanning a little more than half
the key space in 8 days, and the message was successfully
decrypted. Doligez noted that many people have access to the
amount of computing power that he used, and concluded that the
exportable Secure Sockets Layer protocol is not strong enough
to resist the attempts of amateurs to decrypt a "secure"
message.

   In January 1996, an MIT undergraduate student used a single
$83,000 graphics computer to perform the same task in 8 days.
Testing keys at an average rate of more than 830,000 keys per
second, the program running on this computer would take 15
days to test every key.
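
   The arithmetic behind these figures is straightforward. The
short calculation below, a sketch assuming a uniform 40-bit key
space and the key-testing rate quoted above, reproduces the
approximate search times:

    key_space = 2 ** 40        # about 1.1 trillion possible 40-bit keys
    rate = 830000              # keys tested per second, as quoted above
    seconds_per_day = 86400
    full_search_days = key_space / rate / seconds_per_day
    average_search_days = full_search_days / 2
    print(full_search_days)     # roughly 15 days to try every key
    print(average_search_days)  # roughly 7 to 8 days on average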

____________________________________________________________

                  BOX 4.6 Difficulties in
                  Controlling Cryptography


   Hardware products with encryption capabilities can be
controlled on approximately the same basis as traditional
munitions. But software products with encryption capabilities
are a different matter. A floppy disk containing programs
involving cryptography is visually indistinguishable from one
containing any other type of program or data files.
Furthermore, software products with encryption capabilities
can be transported electronically, with little respect for
physical barriers or national boundaries, over telephone lines
and the Internet with considerable ease. Cryptographic
algorithms, also controlled by the International Traffic in
Arms Regulations as "technical data," represent pure knowledge
that can be transported over national borders inside the heads
of people or via letter.

   As is true for all other software products, software
products with encryption capabilities are infinitely
reproducible at low cost and with perfect fidelity; hence, a
controlled item can be replicated at a large number of points.
This fact explains how vast amounts of software piracy can
occur both domestically and abroad. In principle, one software
product with encryption capabilities taken abroad can serve as
the seed for an unlimited number of reproductions that can
find their way to hostile parties. Finally, it can be argued
that the rogue nations that pose the most important targets
for U.S. signals intelligence collection are also the least
likely to refrain from pirating U.S. software.

____________________________________________________________

              BOX 4.7 Key Differences Between
             Commercial Products and "Freeware"

_____________________________________________________________

                                     Products
                                     from
                                     Major
                                     Commercial   "Freeware"
                                     Vendors       Products
____________________________________________________________

Stake of reputation in               Higher         Lower
product offered

Scale of operation                   Larger         Smaller

Cost of distribution                 Higher         Lower

Support for products                 Greater        Lesser

Role of profit-making motive         Higher         Lower

Ability to integrate cryptography    Greater        Lesser
into useful and sophisticated
general-purpose software

Vulnerability to regulatory and      Higher         Lower
legal constraints

Likelihood of market                 Higher         Lower
"staying power"

Likelihood of wide distribution      Higher         Lower
and use

Financial liability for              Higher         Lower
poor product performance

Cost of entry into markets           Higher         Lower
____________________________________________________________

NOTE: All of the characterizations listed are tendencies
rather than absolutes, and are relative (i.e., determined by
comparing products from major commercial vendors to freeware).

____________________________________________________________

                BOX 4.8 A Partial Survey of
        Foreign Encryption Products on the TIS Survey


   +    A British product manual notes that "a key can be any
word, phrase, or number from 1 to 78 characters in length,
though for security purposes keys shorter than six characters
are not recommended." Only alphanumeric characters are used in
the key, and alpha characters do not distinguish between upper
and lower case. While the longer pass phrases can produce keys
with the full 56 bits of uncertainty [changing "can" to "do"
would require more extensive tests], passwords of even six
characters are woefully inadequate. It is dangerous to allow
users to enter such keys, much less the single-character keys
allowed by this product.

   +    One British product is a DES implementation that
recommends cipher block chaining, but uses electronic codebook
(ECB) mode as the default. The use of ECB as the default is
dangerous because ECB is less secure than cipher block
chaining.

   +    A Danish product uses DES with an 8-character key, but
limits each character to alphanumeric and punctuation symbols.
Hence the key is less than a full 56 bits long. With this
restriction, many users are likely to use only upper or lower
case alpha characters, resulting in a key less than 40 bits
long.

   +    A foreign product uses the FEAL algorithm as well as
a proprietary algorithm. Aside from the question of algorithm
strength, the key is 1 to 8 characters long and does not
distinguish between upper and lower case. The result is a
ridiculously short key, a problem that is compounded by the
recommendation in the manual to use a 6- to 8-letter
artificial word as the key (e.g., it suggests that for the
name Bill, "billbum" might be used as the key).

   +    A product from New Zealand uses DES plus a public-key
system similar to RSA, but based on Lucas functions. The
public-key portion limits the key size to 1,024 bits, but does
not seem to have a lower bound, a potentially dangerous
situation. The DES key can be 1 to 24 characters in length. If
the key is 1 to 8 characters, then single DES is used,
otherwise triple DES is used. The lack of a lower bound on key
length is dangerous.

   +    An Israeli product uses DES or QUICK, a proprietary
algorithm. The minimum key length is user selectable between
0 and 8 characters. Allowing such small lower bounds on key
length is dangerous. The product also has a "super-password"
supplied by the vendor, another potentially dangerous
situation. This product is available both in hardware and in
software.

   +    A German hardware product has user-settable S-boxes,
and the key can be entered either as 8 characters or 16
hexadecimal characters to yield a true 64-bit key (which will
be reduced by the algorithm to 56 bits). The use of 16
hexadecimal character keys will result in higher security, but
if the key can also be entered as 8 alphanumeric characters,
many users are likely to do so, thus severely reducing the
security level. User-selectable S-boxes can have advantages
(if they are unknown to a cryptanalyst) and disadvantages (if
they are poorly chosen and either are known to or can be
guessed by a cryptanalyst). On balance, the danger is arguably
greater than the advantage.

   +    A British product recommends one master key per
organization so that files can be shared across personal
computers. This practice is very dangerous.

   To summarize, the defects in these products are related to
poor key management: the products either employ or allow key
management practices that would enable a determined and
knowledgeable adversary to penetrate the security they offer
with relative ease. As noted in Section 4.2 of the text, U.S.
products are not necessarily more secure.
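
   The key-length problems noted in several of the examples
above can be quantified with a short calculation. The sketch
below assumes that each key character is chosen uniformly at
random from the allowed alphabet (user-chosen keys are
generally far less random); it shows why an 8-character key
drawn from a restricted character set falls well short of the
nominal 56-bit strength of DES:

    import math

    def effective_key_bits(num_chars, alphabet_size):
        # Entropy in bits of a key whose characters are drawn
        # uniformly at random from an alphabet of the given size.
        return num_chars * math.log2(alphabet_size)

    # 8 characters, upper/lower-case letters plus digits (62 symbols):
    print(effective_key_bits(8, 62))   # about 47.6 bits, below 56
    # 8 characters, case-insensitive letters plus digits (36 symbols):
    print(effective_key_bits(8, 36))   # about 41.4 bits
    # 8 characters, lower-case letters only (26 symbols):
    print(effective_key_bits(8, 26))   # about 37.6 bits, below even 40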

----------

SOURCE: Committee examination and synthesis of materials
provided by Trusted Information Systems Inc.

____________________________________________________________

             BOX 4.9 Circumventions of the ITAR


   Current export controls on cryptography can apparently be
circumvented in a number of entirely legal and/or
hard-to-detect ways. For example:

   +    A U.S. company can develop a product without encryption
capabilities and then sell the source code of the product to
a friendly foreign company that incorporates additional source
code for encryption into the product for resale from that
foreign country (assuming that the foreign country has no
export controls on cryptography, or weaker ones).

   +    A U.S. company possessing products with encryption
capabilities can be bought by a foreign company; in general,
no attempt is made to recover those products.

   +    A U.S. company can work with legally independent
counterparts abroad that can incorporate cryptographic
knowledge available worldwide into products.

____________________________________________________________

              BOX 4.10 Problems Arising from a
              Lengthy Export Licensing Process


   +    Some foreign customers know it will take a long time
to obtain a positive licensing decision, and as a consequence
do not bother to approach U.S. vendors at all.

   +    Delivery of products to market is delayed; even when
export licenses are eventually granted, they are often granted
too late to be useful, because the field of information
technology moves so quickly.

   +    Rapid decisions are not rendered. In one instance
reported to the committee, a U.S. information technology
company wanted permission to use its own software (with strong
encryption capabilities) to communicate with its foreign
offices. Such cases are in theory expedited because of a
presumptive approval in these circumstances; this vendor's
government contacts agreed that "such an application would be
no problem"' and that an approval would be a rapid
"rubber-stamp" one, but in fact, this vendor is still awaiting
a license after more than a year.

   +    System integrators intending to ship complete systems
rather than individual products face particular difficulties
in obtaining a speedy turnaround, because the task for
national security authorities involves an assessment of the
entire system into which a given product (or products) with
encryption capabilities will be integrated, rather than an
assessment of just the products with encryption capabilities
alone.

   +    Even vendors that manufacture cryptographic software
not intended for export are required to register with the
State Department Office of Defense Trade Controls, primarily
"to provide the U.S. government with necessary information on
who is involved in certain manufacturing and exporting
activities."(1)

----------

   (1)  International Traffic in Arms Regulations, Section
122.1 (c).

____________________________________________________________

                 BOX 4.11 On The Export of
           Technical Data Related to Cryptography


   "Cryptologic technical data ... refers ... only [to] such
information as is designed or intended to be used, or which
reasonably could be expected to be given direct application,
in the design, production, manufacture, repair, overhaul,
processing, engineering, development, operation, maintenance
or reconstruction of items in such categories. This
interpretation includes, in addition to engineering and design
data, information designed or reasonably expected to be used
to make such equipment more effective, such as encoding or
enciphering techniques and systems, and communications or
signal security techniques and guidelines, as well as other
cryptographic and cryptanalytic methods and procedures. It
does not include general mathematical, engineering or
statistical information, not purporting to have or reasonably
expected to be given direct application to equipment in such
categories. It does not include basic theoretical research
data. It does, however, include algorithms and other
procedures purporting to have advanced cryptologic
application.

   "The public is reminded that professional and academic
presentations and informal discussions, as well as
demonstrations of equipment, constituting disclosure of
cryptologic technical data to foreign nationals are prohibited
without the prior approval of this office. Approval is not
required for publication of data within the United States as
described in Section 125.11(a)(1). Footnote 3 to section
125.11 does not establish a prepublication review requirement.

   "The interpretation set forth in this newsletter should
exclude from the licensing provisions of the ITAR most basic
scientific data and other theoretical research information,
except for information intended or reasonably expected to have
a direct cryptologic application. Because of concerns
expressed to this office that licensing procedures for
proposed disclosures of cryptologic technical data contained
in professional and academic papers and oral presentations
could cause burdensome delays in exchanges with foreign
scientists, this office will expedite consideration as to the
application of ITAR to such disclosures. If requested, we
will, on an expedited basis provide an opinion as to whether
any proposed disclosure, for other than commercial purposes,
of information relevant to cryptology, would require licensing
under the ITAR."

----------

SOURCE: Office of Munitions Control, Department of State,
"Cryptography/Technical Data," in *Munitions Control
Newsletter*, Number 80, February 1980. (The Office of
Munitions Control is now the Office of Defense Trade
Controls.)

____________________________________________________________

             BOX 4.12 Two Export Control Cases


                   The Zimmermann PGP Case

   Philip Zimmermann is the author of a software program known
as PGP (for Pretty Good Privacy). PGP is a program that is
used to encrypt mail messages end-to-end based on public-key
cryptography. Most importantly, PGP includes a system for key
management that enables two users who have never interacted to
communicate securely based on a set of trusted intermediaries
that certify the validity of a given public key. Across the
Internet, PGP is one of the most widely used systems for
secure e-mail communication.
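
   The certification idea can be sketched very simply. The toy
Python fragment below is not PGP's actual data format or trust
algorithm; it only illustrates how signatures from trusted
introducers can vouch for a public key that the recipient has
never seen before (all names and keys shown are hypothetical):

    # Hypothetical record of which users have signed (certified)
    # which public keys.
    certifications = {
        "carol-public-key": {"alice", "bob"},
        "dave-public-key": {"mallory"},
    }

    def key_is_trusted(key_id, trusted_introducers):
        # Accept a key if at least one introducer whom the user
        # already trusts has certified it.
        signers = certifications.get(key_id, set())
        return bool(signers & trusted_introducers)

    my_introducers = {"alice"}
    print(key_is_trusted("carol-public-key", my_introducers))  # True
    print(key_is_trusted("dave-public-key", my_introducers))   # False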

   Zimmermann developed PGP as a "freeware" program to be
distributed via diskette. Another party subsequently posted
PGP to a USENET newsgroup.(1) (A commercial version licensed
from but not supplied by Zimmermann has since emerged.) In
1993, Zimmermann was identified as the target of a criminal
investigation into possible violations of the export
control laws.(2) Zimmermann was careful to state that PGP was
not to be used or downloaded outside the United States, but of
course international connections to the Internet made for easy
access to copies of PGP located within the United States. In
January 1996, the U.S. Department of Justice closed its
investigation of Zimmermann without filing charges against
him.(3)


       The Bruce Schneier-*Applied Cryptography* Case

   Bruce Schneier wrote a book called *Applied
Cryptography*(4) that was well received in the cryptography
community. It was also regarded as useful in a practical sense
because it contained printed on its pages source code that
could be entered into a computer and compiled into a working
cryptography program. In addition, when distributed within the
United States, the book contained a floppy disk that contained
source code identical to the code found in the book. However,
when another party (Philip Karn) requested a ruling on the
exportability of the book, he (Karn) received permission to
export the book but not the disk. This decision has been
greeted with considerable derision in the academic
cryptography community, with comments such as "They think that
terrorists can't type?" expressing the general dismay of the
community.

----------

   (1)  A USENET newsgroup is in effect a mailing list to
which individuals around the world may subscribe. Posting is
thus an act of transmission to all list members.

   (2)  John Schwartz, "Privacy Program: An On-Line Weapon?,"
*Washington Post*, April 3, 1995, p. A-1.

   (3)  Elizabeth Corcoran, "U.S. Closes Investigation in
Computer Privacy Case," *Washington Post*, January 12, 1996,
p. A-11.

   (4)  Bruce Schneier, *Applied Cryptography*, John Wiley and
Sons, 1994.

____________________________________________________________

[End Chapter 4]





[Head note all pages: May 30, 1996, Prepublication Copy
Subject to Further Editorial Correction]


                              5

           Escrowed Encryption and Related Issues


   Chapter 5 describes a tool -- escrowed encryption -- that
responds to the needs described in Chapter 3 for exceptional
access to encrypted information. Escrowed encryption is the
basis for a number of Administration proposals that seek to
reconcile needs for information security with the needs of
law enforcement and, to a lesser extent, national security. As
in the case of export controls, escrowed encryption generates
considerable controversy.


              5.1 WHAT IS ESCROWED ENCRYPTION?


   The term "escrow," as used conventionally, implies that
some item of value (e.g., a trust deed, money, real property,
other physical object) is delivered to an independent trusted
party that might be a person or an organization (i.e., an
escrow agent) for safekeeping, and is accompanied by a set of
rules provided by the parties involved in the transaction
governing the actions of the escrow agent. Such rules
typically specify what is to be done with the item, the
schedule to be followed, and the list of other events that
have to occur. The underlying notion is that the escrow agent
is a secure haven for temporary ownership or possession of the
item, is legally bound to comply with the set of rules for its
disposition, functions as a disinterested extratransaction
party, and bears legal liability for malfeasance or mistakes.

   Usually, the rules stipulate that, once all conditions set
forth in the escrow rules have been fulfilled, the item will
eventually be delivered to a specified party (e.g., possibly
the original depositing party, an estate, a judicial officer
for custody, one or more individuals or organizations). In any
event, the salient point is that all
terms and conditions and functioning of an escrow process are,
or can be, visible to the parties involved; moreover, the
behavior and performance of formal escrow agents are governed
by legally established obligations.

   As it applies to cryptography, the term "escrow" was
introduced by the government's April 1993 Clipper initiative
in the context of encryption keys. Prior to this time, the
term "escrow" had not been widely associated with
cryptography, although the underlying concepts had been known
for some time (as described below). The Clipper initiative
promoting escrowed encryption was intended "to improve the
security and privacy of telephone communications while meeting
the legitimate needs of law enforcement."(1) In this original
context, the term "escrowed encryption" had a very specific
and narrow meaning: escrowed encryption was a mechanism that
would assure law enforcement access to the voice
communications underlying encrypted intercepts from wiretaps.

   However, during 3 years of public debate and dialogue,
"escrow," "key escrow," and "escrowed encryption" have become
terms with a much broader meaning. Indeed, many schemes now
described as "escrowed encryption" are quite different from
escrowed encryption as the term was used in the Clipper
initiative.

   As is so often the case in computer-related matters,
terminology for escrowed systems is today not clearly
established and can be confusing or misleading. While new
terminology could be introduced in an effort to clarify
meaning, the fact is that the present policy and public and
technical dialogues all use "escrow" and "escrowed encryption"
in a very generic and broad sense. It is no longer the very
precise restricted concept embodied in the Clipper initiative
and described in Section 5.2.1. Escrow as a concept now
applies not only to the initial purpose of assuring law
enforcement access to encrypted materials, but also to
possible end-user or organizational requirements for a
mechanism to protect against lost, corrupted, or unavailable
keys. It can also mean that some process such as authority to
decrypt a header containing a session key is escrowed with a
trusted party, or it can mean that a corporation is ready to
cooperate with law enforcement to access encrypted materials.

   This report conforms to current usage, considering escrowed
encryption as a broad concept that can be implemented in many
ways; Section 5.3 addresses forms of escrowed encryption other
than that described in the Clipper initiative. Also, escrowed
encryption is only one of several approaches to providing
exceptional access to encrypted information; nonescrow
approaches to providing exceptional access are discussed in
Section 7.2.

   Finally, the relationship between "strong encryption" and
"escrowed encryption" should be noted. As stated above,
escrowed encryption refers to an approach to encryption that
enables exceptional access to plaintext(2) without requiring a
third party (e.g., government acting with legal authorization,
a corporation acting in accordance with its contractual rights
vis-a-vis its employees, an individual who has lost an
encryption key) to perform a cryptanalytic attack. At the same
time, escrowed encryption can involve cryptographic algorithms
that are strong or weak and keys that are long or short. Some
participants in the public debate appear to believe that
escrowed encryption is necessarily equivalent to weak
encryption, because it does not prevent third parties from
having access to the relevant plaintext. But this is a
mischaracterization of the intent behind escrowed encryption,
since all escrowed encryption schemes proposed to date are
intended to provide very strong cryptographic confidentiality
(strong algorithms, relatively long keys) for users against
unauthorized third parties, but no confidentiality at all
against third parties who have authorized exceptional access.

----------

   (1)  See Statement by the Press Secretary, The White House,
April 16, 1993. Reprinted in David Banisar (ed.), *1994
Cryptography and Privacy Sourcebook*, Electronic Privacy
Information Center, Diane Publishing, Upland, Pennsylvania,
1994, Part II. The name "Clipper," initially selected as the
name of this effort, later proved to be a trademark whose
holder relinquished it to public use.

   (2)  In the more general meaning of escrowed encryption,
exceptional access refers to access to plaintext by a party
other than the originator and the recipient of encrypted
communications. For the case of stored information,
exceptional access may refer to access to the plaintext of an
encrypted file by someone not designated by the original
encryptor of the file to decrypt it or even by persons so
designated who have forgotten how to do so. See also Chapter
3.

   Contrast the meaning of third-party access in the original
Clipper context, in which third-party access refers to assured
access, under proper court authorization, by law enforcement
to the plaintext of an encrypted voice conversation. The
Clipper initiative was intended to support a system that
provided a technically convenient means to assure fulfillment
of such a requirement. Note that this meaning is much narrower
than the use of the more general term "exceptional access"
described in the previous paragraph.

____________________________________________________________


               5.2 ADMINISTRATION INITIATIVES
               SUPPORTING ESCROWED ENCRYPTION


   Since inheriting the problem of providing law enforcement
access to encrypted telephony from the outgoing Bush
Administration in late 1992, Clinton Administration officials
have said that as they considered the not-so-distant future of
information technology and information security along with the
stated needs of law enforcement and national security for
access to information, they saw three alternatives.(3)

   +    To do nothing, resulting in the possible proliferation
of products with encryption capabilities that would seriously
weaken, if not wholly negate, the authority to wiretap
embodied in the Wiretap Act of 1968 (Title III) and damage
intelligence collection for national security and foreign
policy reasons;

   +    To support an approach based on weak encryption,
likely resulting in poor security and cryptographic
confidentiality for important personal and business
information; and

   +    To support an approach based on strong but escrowed
encryption. If widely adopted and properly implemented,
escrowed encryption could provide legitimate users with high
degrees of assurance that their sensitive information would
remain secure but nevertheless enable law enforcement and
national security authorities to obtain access to
escrow-encrypted data in specific instances when authorized
under law. Moreover, the Administration hoped that by meeting
legitimate demands for better information security, escrowed
encryption would dampen the market for unescrowed encryption
products that would deny access to law enforcement and
national security authorities even when they sought access for
legitimate and lawfully authorized purposes.

   The Administration chose the last, and since April 1993,
the U.S. government has advanced a number of initiatives to
support the insertion of key escrow features into products
with encryption capabilities that will become available in the
future. These include the Clipper initiative and the Escrowed
Encryption Standard, the Capstone/Fortezza initiative, and the
proposal to liberalize export controls on products using
escrowed encryption. These initiatives raise a number of
important issues that are the focus of Sections 5.3 to 5.13.

-----------

   (3)  See, for example, statement of Raymond Kammer, deputy
director, National Institute of Standards and Technology,
before the Committee on the Judiciary, U.S. Senate, May 3,
1994. Available on-line from
http://www.nist.gov/item/testimony/may94/encryp.html.

____________________________________________________________


              5.2.1 The Clipper Initiative and
              the Escrowed Encryption Standard

   As noted above, the Clipper initiative was conceived as a
way of providing legal access by law enforcement authorities
to encrypted telephony.(4) The Escrowed Encryption Standard
(EES; a Federal Information Processing Standard, FIPS-185) was
promulgated in February 1994 as the key technological
component of the Clipper initiative (Box 5.1). Specifically,
the EES called for the integration of special microelectronic
integrated circuit chips (called "Clipper chips") into devices
used for voice communications. These chips, as one part of an
overall system, would provide voice confidentiality for the
user and exceptional access to law enforcement authorities. To
provide these functions, the Clipper chip was designed with a
number
of essential characteristics:

   +    Confidentiality would be provided by a classified
algorithm known as Skipjack. Using an 80-bit key, the Skipjack
algorithm would offer considerably more protection against
brute-force attacks than the 56-bit DES algorithm (FIPS 46-1).
The Skipjack algorithm was reviewed by several independent
experts, all with the necessary security clearances. In the
course of an investigation limited by time and resources, they
reported that they did not find short-cuts that would
significantly reduce the time to perform a cryptanalytic
attack below what would be required by brute force.(5)

   +    The chip would be protected against reverse
engineering and other attempts to access its technical
details.

   +    The chip would be factory-programmed with a
chip-unique secret key, the "unit key" or "device key,"(6) at
the time of fabrication. Possession of this key would enable
one to decrypt all communications sent to and from the
telephone unit in which the chip was integrated.

   +    A law enforcement access field (LEAF) would be a
required part of every transmission and would be generated by
the chip. The LEAF would contain two items: (a) the current
session key,(7) encrypted with the device-unique unit key, and
(b) the chip serial number. The entire LEAF would itself be
encrypted by a different secret key, the "family key," also
permanently embedded in the chip. The
family key would be the same in all Clipper chips produced by
a given manufacturer; in practice, all Clipper chips
regardless of manufacturer are programmed today by the
Mykotronx Corporation with the same family key.

   To manage the use of the LEAF, the U.S. government would
undertake a number of actions:

   +    The unit key, known at the time of manufacture and
unchangeable for the life of the chip, would be divided into
two components, each of which would be deposited with and held
under high security by one of two trusted government escrow
agents located within the Departments of Commerce and
Treasury.

   +    These escrow agents would serve as repositories for
all such materials, releasing the relevant information to law
enforcement authorities upon presentation of the unit
identification and lawfully obtained court orders.

   When law enforcement officials encountered a Clipper-
encrypted conversation on a wiretap, they would use the LEAF
to obtain the serial number of the Clipper chip performing the
encryption and the encrypted session key.(8) Upon presentation
of the serial number and court authorization for the wiretap
to the escrow agents, law enforcement officials could then
obtain the proper unit-key components, combine them, recover
the session key, and eventually decrypt the encrypted voice
communications.(9) Only one key would be required in order to
obtain access to both sides of the Clipper-encrypted
conversation. The authority for law enforcement to approach
escrow agents and request unit-key components was considered
to be that granted by Title III and the Foreign Intelligence
Surveillance Act (FISA).(10)
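
   The sequence just described can be summarized in a short
conceptual sketch, shown below in Python. The sketch is
illustrative only: Skipjack is classified, so a toy XOR
"cipher" stands in for it; the key lengths, the LEAF layout,
and all names (toy_encrypt, unit_key, and so on) are
hypothetical; and no attempt is made to model the actual
escrow procedures.

import os

KEY_LEN = 10  # 80 bits, the length of a Skipjack key

def toy_encrypt(key, data):
    # Toy XOR "cipher" standing in for the classified Skipjack
    # algorithm; it is NOT secure and serves only to show the flow.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

toy_decrypt = toy_encrypt  # XOR encryption is its own inverse

# At fabrication: a device-unique unit key is generated and split
# into two components, one for each escrow agent (Commerce and
# Treasury).  XORing the components together rebuilds the unit key.
unit_key = os.urandom(KEY_LEN)
component_1 = os.urandom(KEY_LEN)
component_2 = bytes(a ^ b for a, b in zip(unit_key, component_1))
serial_number = b"CHIP-00001"
family_key = b"FAMILY-KEY"  # common to all chips

# During a call: the chip wraps the session key under the unit key,
# appends its serial number, and encrypts the whole LEAF under the
# family key before transmitting it along with the conversation.
session_key = os.urandom(KEY_LEN)
leaf = toy_encrypt(family_key,
                   toy_encrypt(unit_key, session_key) + serial_number)

# With a court order: officials unwrap the LEAF using the family
# key, present the serial number and the order to both escrow
# agents, XOR the released components to rebuild the unit key, and
# recover the session key needed to decrypt the conversation.
inner = toy_decrypt(family_key, leaf)
wrapped_session_key, serial = inner[:KEY_LEN], inner[KEY_LEN:]
assert serial == serial_number
rebuilt_unit_key = bytes(a ^ b for a, b in zip(component_1, component_2))
assert toy_decrypt(rebuilt_unit_key, wrapped_session_key) == session_key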

   As a FIPS, the EES is intended for use by the federal
government and has no legal standing outside the federal
government. Indeed, its use is optional even by federal
agencies. In other words, federal agencies with a requirement
for secure voice communications have a choice about whether or
not to adopt the EES for their own purposes. More importantly,
the use of EES-compliant devices by private parties cannot in
general be compelled by executive action alone; private
consumers are free to decide whether or not to use
EES-compliant devices to safeguard communications and are free
to use other approaches to communications security should they
so desire." However, if consumers choose to use EES-compliant
devices, they must accept key escrow as outlined in procedures
promulgated by the government. This characteristic -- that
interoperability requires acceptance of key escrow -- is a
design choice; a different specification could permit the
interoperability of devices with or without features for key
escrow.

   The EES was developed by communications security experts
from the National Security Agency (NSA), but the escrow
features of the EES are intended to meet the needs of law
enforcement -- i.e., its needs for clandestine surveillance of
electronic and wire communications as described in Chapter 3.
NSA played this development role because of its technical
expertise. EES-compliant devices are also approved for
communicating classified information up to and including
SECRET. In speaking with the committee, Administration
officials described the Clipper initiative as more or less
irrelevant to the needs of signals intelligence (SIGINT) (Box
5.2).

   As of early 1996, AT&T has sold about 10,000 to 15,000
units of the Surity Telephone Device 3600. These include four
configurations: Model C, containing only the Clipper chip,
which has been purchased primarily by U.S. government
customers; Model F, containing only an AT&T-proprietary
algorithm that is exportable; Model P, containing an
AT&T-proprietary nonexportable algorithm in addition to the
exportable algorithm; and Model S, with all three of the
above. Only units with the Clipper chip have a key-escrow
feature. All the telephones are interoperable -- they
negotiate with each other to settle on a mutually available
algorithm at the beginning of a call.(12) In addition, AT&T
and Cycomm International have agreed to jointly develop and
market Clipper-compatible digital voice encryption attachments
for Motorola's Micro-Tac series of handheld cellular
telephones; these products are expected to be available in the
second quarter of 1996.(13) Finally, AT&T makes no particular
secret of the fact that its Surity line of secure voice
communication products employs Clipper chip technology, but
that fact is not featured in the product literature; potential
consumers would have to know enough to ask a knowledgeable
sales representative.

----------

   (4)  Dorothy Denning and Miles Smid, "Key Escrowing Today,"
*IEEE Communications*, Volume 32, 1994, pp. 58-68. Available
on-line from http://www.cosc.georgetown.edu/~denning/
crypto/Key-Escrowing-Today.txt.

   (5)  See Ernest Brickell et al., *SKIPJACK Review: Interim
Report*, July 28, 1993. Posted to the "sci.crypt" newsgroup on
August 1, 1993, by Dorothy Denning and available from
http://www.cosc.georgetown.edu/~denning/SKIPJACK.txt.
Reprinted in Lance Hoffman (ed.), *Building in Big Brother:
The Cryptographic Policy Debate*, Springer-Verlag, New York,
1995, pp. 119-130.

   (6)  The device key or unit key is used to open the
encryption that protects a session key. Hence, possession of
the unit key allows the decryption of all messages or files
encrypted with that unit or device. "Session key" is defined
in footnote 7.

   (7)  "Session," as in computer science, denotes a period of
time during which one or more computer-based processes are
operational and performing some function; typically two or
more of systems, end users, or software processes are involved
in a session. It is analogous to a meeting among these things.
For cryptography, a session is the plaintext data stream on
which the cryptographic process operates. The session key is
the actual key that is needed to decrypt the resulting
ciphertext. In the context of an encrypted data transmission
or telephone call, the session key is the key needed to
decrypt the communications stream. For encrypted data storage,
it is the key needed to decrypt the file. Note that in the
case of symmetric encryption (discussed in Chapter 2), the
decryption key is identical to the encryption key. Since
asymmetric encryption for confidentiality is efficient only
for short messages or files, symmetric encryption is used for
session encryption of telephony, data transmissions, and data
storage.

   (8)  Because the family key would be known to law
enforcement officials, obtaining the unencrypted LEAF would
present no problems.

   (9)  Questions have arisen about NSA access to escrowed
keys. NSA has stated for the record to the committee that "key
escrow does not affect either the authorities or restrictions
applicable to NSA's signals intelligence activities. NSA's
access to escrowed keys will be tied to collection against
legitimate foreign intelligence targets. The key holder must
have some assurance that NSA is involved in an authorized
intelligence collection activity and that the collection
activity will be conducted in accordance with the appropriate
restrictions." For a description of these restrictions, see
Appendix D of this report.

   (10) Dorothy Denning and Miles Smid, "Key Escrowing Today,"
*IEEE Communications*, Volume 32, 1994, pp. 58-68. Available
on-line from http://www.cosc.georgetown.edu/~denning/
crypto/Key-Escrowing-Today.txt.

   Given its initial intent to preserve law enforcement's
ability to conduct wiretaps, it follows that Clipper key
escrow would be conducted without the knowledge of parties
whose keys had been escrowed, and would be conducted by a set
of rules that would be publicly known but not changeable by
the affected parties. Under the requirements of Title III, the
affected parties would be notified of the tapping activity at
its conclusion, unless the information were to become the
basis for a criminal indictment or an ongoing investigation.
In the latter case, the accused would learn of the wiretaps,
and hence the law enforcement use of escrowed keys, through
court procedures.

   (11) For example, an opinion issued by the Congressional
Research Service argues that legislation would be required to
mandate the use of the Clipper chip beyond Federal computer
systems. Memorandum from the American Law Division,
Congressional Research Service, *Current Legal Authority to
Mandate Adoption of "Clipper Chip" Standards By Private
Parties*, Library of Congress, Washington, D.C., October 4,
1994.

   (12) AT&T Secure Communications product literature,
available on-line from http://www.att.com/press/0694/
940613.pdb.html, and personal communication with Bruce Bailey,
AT&T Secure Communications Systems, Greensboro, N.C., March
29, 1996.

   (13) AT&T News Release, "AT&T, Cycomm International Develop
Digital Voice Encryption," November 1, 1995. Available on-line
from http://www.att.com/press/1195/951101.mma.html.

____________________________________________________________


         5.2.2 The Capstone/Fortezza Initiative(14)

   The Capstone/Fortezza effort supports escrowed encryption
for data storage and communications, although a FIPS for this
application has not been issued. Specifically, the Capstone
chip is an integrated-circuit chip that provides a number of
encryption services for both stored computer data and data
communications. For confidentiality, the Capstone chip uses
the Skipjack algorithm, the same algorithm that is used in the
Clipper chip (which is intended only for voice communications,
including low-speed data and fax transmission across the
public switched telephone network), and the same mechanism to
provide for key escrowing. The agents used to hold Capstone
keys are also identical to those for holding Clipper keys --
namely the Departments of Treasury and Commerce. In addition,
the Capstone chip (in contrast to the Clipper chip) provides
services that conform to the Digital Signature Standard
(FIPS-186), which provides digital signatures that
authenticate user identity, and to the Secure Hash Standard
(FIPS-180); the
chip also implements a classified algorithm for key exchange
(usually referred to as the Key Exchange Algorithm (KEA)) and
a random number generator.

   The Capstone chip is the heart of the Fortezza card.(15) The
Fortezza card is a PC-card (formerly known as a PCMCIA card)
intended to be plugged into any computer with a PC-card
expansion slot and appropriate support software; with the card
in place, the host computer is able to provide reliable user
authentication and encryption for confidentiality and certify
data-transmission integrity in any communication with any other
computer so equipped. The Fortezza card is an example of a
hardware token that can be used to ensure proper
authentication.(16) Note also that there are other hardware PC
cards that provide cryptographic functionality similar to that
of Fortezza but without the escrow features.(17)
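
   The hardware token concept and the role of the PIN
(described further in footnote 16) can be illustrated with a
short conceptual sketch in Python. It is not a model of the
actual Fortezza card or its interfaces: the card's classified
algorithms and its DSS signature service are replaced by an
HMAC-SHA-1 stand-in, and the class and method names are
hypothetical. The point is only the control flow: the PIN is
checked locally, never transmitted, and no cryptographic
service is available until the token has been activated.

import hashlib
import hmac
import os

class ConceptualToken:
    # Toy model of a PIN-activated hardware token.

    def __init__(self, pin):
        self._pin_digest = hashlib.sha1(pin.encode()).digest()
        self._secret = os.urandom(20)  # kept inside the token
        self._activated = False

    def activate(self, pin):
        # The PIN is compared inside the "token"; it never appears
        # on any network link, and only the result is visible.
        supplied = hashlib.sha1(pin.encode()).digest()
        self._activated = hmac.compare_digest(supplied, self._pin_digest)
        return self._activated

    def authenticate_message(self, message):
        if not self._activated:
            raise PermissionError("token not activated")
        # Stand-in for signing the SHA hash of the message.
        return hmac.new(self._secret, message, hashlib.sha1).digest()

token = ConceptualToken(pin="1234")
assert token.activate("1234")
tag = token.authenticate_message(b"sample message")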

   To date, the NSA has issued two major solicitations for
Fortezza cards, the second of which was for 750,000 cards.(18)
These cards will be used by those on the Defense Messaging
System, a communications network that is expected to
accommodate up to 2 million Defense Department users in 2005.
In addition, Fortezza cards are intended to be available for
private sector use. The extent to which Fortezza cards will be
acceptable in the commercial market remains to be seen,
although a number of product vendors have decided to
incorporate support for Fortezza cards in some products.(19)

----------

   (14) Technically speaking, Clipper and Capstone/Fortezza
are not separate initiatives. The Capstone program had been
under way for a number of years prior to the public
announcement of the Clipper chip in 1993, and the Clipper chip
is based entirely on technology developed under the Capstone
program. The Clipper chip was developed when the incoming
Clinton Administration felt it had to address the problem of
voice encryption. However, while Clipper and Capstone/Fortezza
are not technically separate programs, the public debate has
engaged Clipper to a much greater degree than it has Capstone.
For this reason, this report discusses Clipper and
Capstone/Fortezza separately.

   (15) The Fortezza card was previously named the Tessera
card; the name was changed when previous trademark claims on
"Tessera" were discovered.

   (16) To ensure that the holder of the Fortezza card is in
fact the authorized holder, a personal identification number
(PIN) is associated with the card: only when the proper PIN is
entered will the Fortezza card activate its various functions.
While concerns have been raised in the security literature
that passwords and PINs are not secure when transmitted over
open communications lines, the PIN used by the Fortezza card
is never used outside of the confines of the user's system.
That is, the PIN is never transmitted over any network link;
the sole function of the PIN is to turn on the Fortezza card,
after which an automated protocol ensures secure
authentication.

   (17) For example, such devices are made by Cylink and
Telequip. See *Government Computer News*, "Security Device is
007 in Your Pocket," August 7, 1995, p. 6.

   (18) For the solicitation, see Paul Constance, "DoD Plans
to Install 750,000 Fortezza Cards," *Government Computer
News*, July 31, 1995, p. 1.

   (19) For example, the Netscape Communications Corporation
has announced that it will support Fortezza in the next
version of its Web browser, while the Oracle Corporation will
support Fortezza in the next version of its Secure Network
Services product. See Elizabeth Sikorovsky, "Netscape and
Oracle Products Support Fortezza Card," *Federal Computer
Week*, October 23, 1995, p. 36.

____________________________________________________________


         5.2.3 The Relaxation of Export Controls on
         Software Products Using "Properly Escrowed"
                      64-bit Encryption

   As noted in Chapter 4, the Administration has proposed to
treat software products using a 64-bit encryption key as it
currently treats products with encryption capabilities that
are based on a 40-bit RC2 or RC4 algorithm, providing that
products using this stronger encryption are "properly
escrowed." This change is intended to facilitate the global
sale of U.S. software products with significantly stronger
cryptographic protection than is available from U.S. products
sold abroad today.

   To work out the details of what is meant by "properly
escrowed," the National Institute of Standards and Technology
held workshops in September and December 1995 at which the
Administration released a number of draft criteria for export
control (Box 5.3). These criteria are intended to ensure that
a product's key escrow mechanism cannot be readily altered or
bypassed so as to defeat the purposes of key escrowing. The
Administration has expressed its intent to move forward
rapidly with its proposal and seeks to finalize export
criteria and make formal conforming modifications to the
export regulations "soon" (at the time of this writing by
early 1996).


   5.2.4 Other Federal Initiatives in Escrowed Encryption

   In addition to the initiatives described above, the
Administration has announced plans for new Federal Information
Processing Standards in two other areas:

   +    FIPS-185 will be modified to include escrowed
encryption for data in both communicated and stored forms. The
modified FIPS is expected in late 1996; how this modification
will relate to Capstone/Fortezza is as yet uncertain.

   +    A FIPS for key escrow will be developed that will,
among other things, specify performance requirements for
escrow agents and for escrowed encryption products. How this
relates to the existing or modified FIPS-185 is also uncertain
at this time.



         5.3 OTHER APPROACHES TO ESCROWED ENCRYPTION


   A general concept akin to escrowed encryption has long been
familiar to some institutions, notably banks, that have for
years purchased information systems allowing them to retrieve
the plaintext of encrypted files or other stored information
long after the immediate need for such information has
passed.(20) However, only since the initial announcement of
the Clipper initiative in April 1993 has escrowed encryption
gained prominence in the public debate.

   Denning and Branstad describe a number of different
approaches to implementing an escrowed encryption scheme, all
of which have been discussed publicly since 1993.(21) Those
and other different approaches vary along the dimensions
discussed below:

   +    Number of escrow agents required to provide
exceptional access. For example, one proposal called for
separation of Clipper unit keys into more than two
components.(22) A second proposal called for the k-of-n
arrangement described in Section 5.9.1.

   +    Affiliation of escrow agents. Among the possibilities
are government in the executive branch, government in the
judicial branch, commercial institutions, product
manufacturers, and customers.

   +    Ability of parties to obtain exceptional access. Under
the Clipper initiative, the key-escrowing feature of the EES
is available only to law enforcement authorities acting under
court order; users never have access to the keys.

   +    Authorities vested in escrow agents. In the usual
discussion, escrow agents hold keys or components of keys. But
in one proposal, escrow agents known as Data Recovery Centers
(DRCs) do not hold user keys or user key components at all.
Products escrowed with a DRC would include in the ciphertext
of a transmission or a file the relevant session key encrypted
with the public key of that DRC and the identity of the DRC in
plaintext. Upon presentation of an appropriate request (e.g.,
valid court order for law enforcement authorities, a valid
request by the user of the DRC-escrowed product), the DRC
would retrieve the encrypted session key, decrypt it, and give
the original session key to the authorized third party, who
could then recover the data encrypted with that key.(23)

   +    Hardware vs. software implementation of products.

   +    Partial key escrow.(24) Under a partial key escrow, a
product with encryption capabilities could use keys of any
length, except that all but a certain number of bits would be
escrowed. For example, a key might be 256 bits long; 216 bits
(256 - 40) of the key would be escrowed, and 40 bits would
remain private. Thus, decrypting ciphertext produced by this
product would require a 256-bit work factor for those without
the escrowed bits, and a 40-bit work factor for those
individuals in possession of the escrowed bits. Depending on
the number of private bits used, this approach would protect
users against disclosure of keys to those without access to
the specialized decryption facilities required to conduct an
exhaustive search against the private key (in this case, 40
bits).
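
   Two of the dimensions listed above lend themselves to brief
illustration. The first sketch below, in Python, makes the
partial key escrow arithmetic concrete. It is illustrative
only, using deliberately small toy parameters (a 24-bit key
with 8 private bits standing in for the 256-bit key with 40
private bits described above); the function and variable names
are hypothetical.

import os

KEY_BITS = 24      # stand-in for the 256-bit key above
PRIVATE_BITS = 8   # stand-in for the 40 bits withheld from escrow

key = int.from_bytes(os.urandom(KEY_BITS // 8), "big")
escrowed_bits = key >> PRIVATE_BITS             # held by escrow agent
private_bits = key & ((1 << PRIVATE_BITS) - 1)  # never escrowed

def recover(escrowed_bits, matches):
    # With the escrowed bits in hand, only the private bits must be
    # searched exhaustively: 2**8 = 256 trials here, 2**40 in the
    # example above, versus 2**24 (or 2**256) without them.
    for guess in range(1 << PRIVATE_BITS):
        candidate = (escrowed_bits << PRIVATE_BITS) | guess
        if matches(candidate):
            return candidate
    return None

# 'matches' would normally test a candidate key against known
# ciphertext; direct comparison keeps the sketch self-contained.
assert recover(escrowed_bits, lambda k: k == key) == key

   The second sketch illustrates the Data Recovery Center (DRC)
arrangement. Textbook RSA with tiny, insecure parameters stands
in for the DRC's public-key pair, the "header" carries the
wrapped session key and the DRC's identity as described above,
and the release procedure is reduced to a single validity
check; again, all names are hypothetical.

import os

# Toy RSA key pair held by a hypothetical DRC (requires Python 3.8+
# for the modular-inverse form of pow); real products would use
# full-strength public-key cryptography.
p, q, e = 61, 53, 17
n = p * q
d = pow(e, -1, (p - 1) * (q - 1))  # the DRC's private exponent

# Sender: generate a session key, wrap it under the DRC's public
# key, and attach the wrapped key and the DRC's identity to the
# ciphertext it accompanies.
session_key = int.from_bytes(os.urandom(1), "big")  # toy-sized key
header = {"drc_id": "DRC-EXAMPLE-1",
          "wrapped_session_key": pow(session_key, e, n)}

# DRC: upon presentation of a valid court order or a valid request
# from the user, unwrap the session key and release it to the
# requester, who can then decrypt the accompanying data.
def drc_release(header, request_is_valid):
    if not request_is_valid:
        raise PermissionError("request not authorized")
    return pow(header["wrapped_session_key"], d, n)

assert drc_release(header, request_is_valid=True) == session_key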

   Box 5.4 describes a number of other conceptual approaches
to escrowed encryption.

----------

   (20) An example first announced in 1994 is Northern
Telecom's "Entrust," which provides for file encryption and
digital signature in a corporate network environment using RSA
public-key cryptography. "Entrust" allows master access by a
network administrator to all users' encrypted files, even
after a user has left the company. A product review for a
recent version of "Entrust" can be found in Stephen Cobb,
"Encryption for the Enterprise," *Network World*, March 11,
1996, p. 57.

   (21) All of these examples are taken from Dorothy Denning
and Dennis Branstad, "A Taxonomy of Key Escrow Encryption,"
*Communications of the ACM*, Volume 39, March, 1996.

   (22) This comment was probably made during the meetings of
July, August, and September, 1993 by the Computer System
Security and Privacy Advisory Board to solicit public views on
the Clipper initiative. Transcripts of the meetings are
available from the National Institute of Standards and
Technology.

   (23) Stephen T. Walker et al., *Commercial Key Escrow:
Something for Everyone Now and for the Future*, Report #541,
Trusted Information Systems, Glenwood, Md., January, 1995.

   (24) Adi Shamir, "Partial Key Escrow: A New Approach to
Software Key Escrow," summary of presentation at NIST FIPS Key
Escrow Workshop, National Institute of Standards and
Technology, Gaithersburg, Md., September 15, 1995. Available
on-line at http://reality.sgi.com/employees/chrisr_corp/
pkedc.html.

____________________________________________________________


            5.4 THE IMPACT OF ESCROWED ENCRYPTION
                   ON INFORMATION SECURITY


   In the debate over escrowed encryption, the dimension of
information security that has received the largest amount of
public attention has been confidentiality. Judgments about the
impact of escrowed encryption on confidentiality depend on the
point of comparison. If the point of comparison is taken to be
the confidentiality of data available today, then the wide use
of escrowed encryption does improve confidentiality. The
reason is that most information today is entirely unprotected.

   Consider first information in transit (communications).
Most communications today are unencrypted. For example,
telephonic communications can be tapped in many different
ways, including through alligator clips at a junction box in
the basement of an apartment house or on top of a telephone
pole, off the air when some part of a telephonic link is
wireless (e.g., in a cellular call), and from the central
switching office that is carrying the call. Calls made using
EES-compliant telephones would be protected against such
surveillance, except when surveillance parties (presumably law
enforcement authorities) had obtained the necessary keys from
escrow agents. As for information in storage, most files on
most computers are unencrypted. Escrowed encryption applied to
these files would protect them against threats such as casual
snoops, although individuals with knowledge of the
vulnerabilities of the system on which those files reside
might still be able to access them.

   On the other hand, if the point of comparison is taken to
be the level of confidentiality that could be possible using
unescrowed encryption, then escrowed encryption offers a lower
degree of confidentiality. Escrowed encryption by design
introduces a system weakness (i.e., it is deliberately
designed to allow exceptional access), and so if the
procedures that protect against improper use of that access
somehow fail, information is left unprotected.(25) For
example, EES-compliant telephones would offer less
confidentiality for telephonic communications than would
telephones that could be available with the same encryption
algorithm and implementation but without the escrow feature,
since such telephones could be designed to provide
communications confidentiality against all eavesdroppers,
including rogue police, private investigators, or (and this is
the important point) legally authorized law enforcement
officials.

   More generally, escrowed encryption weakens the
confidentiality provided by an encryption system by providing
an access path that can be compromised.(26) Yet escrowed
encryption also provides a hedge against the loss of access to
encrypted data by those authorized for access; for example, a
user may lose or forget a decryption key. Assurances that
encrypted data will be available when needed are clearly
greater when a mechanism has been installed to facilitate such
access. Reasonable people may disagree about how to make that
trade-off in any particular case, thus underscoring the need
for end users themselves to make their own risk-benefit
assessments regarding the loss of authorized access (against
which escrowed encryption can protect by guaranteeing key
recovery) vs. the loss of confidentiality to unauthorized
parties (whose likelihood is increased by the use of escrowed
encryption).

   A point more specifically related to EES is that escrowed
encryption can also be used to enhance certain dimensions of
Title III protection. For example, the final procedures for
managing law enforcement access to EES-protected voice
conversations call for the hardware providing exceptional
access to be designed in such a way that law enforcement
officials would decrypt communications only if the
communications were occurring during the time window specified
in the initial court authorization. The fact that law
enforcement officials will have to approach escrow agents to
obtain the relevant key means that there will be an audit
trail for wiretaps requiring decryption, thus deterring
officials who might be tempted or able to act on their own in
obtaining a wiretap without legal authorization.

----------

   (25) Even worse, it is not just future communications that
are placed at risk, but past communications as well. For
example, if encrypted conversations are recorded and the
relevant key is not available, they are useless. However, once
the unit key is obtained, those recordings become decipherable
if they are still available. Such recording would be illegal,
because legal authorization for the wiretap would have been
necessary to obtain the key, but since these circumstances
presume a breakdown of escrow procedures in the first place,
the fact of illegality is not particularly relevant.

____________________________________________________________


            5.5 THE IMPACT OF ESCROWED ENCRYPTION
                     ON LAW ENFORCEMENT


   Box 5.5 describes the requirements for escrowed encryption
that law enforcement authorities (principally the FBI) would
like product vendors to accommodate. But two additional
high-level questions must be addressed before escrowed
encryption is accepted as an appropriate solution to the
stated law enforcement problem.


     5.5.1 Balance of Crime Enabled vs. Crime Prosecuted

   One question is the following: Does the benefit to law
enforcement from access to encrypted information through an
escrow mechanism outweigh the damage that might occur due to
the failure of procedures intended to prevent unauthorized
access to the escrow mechanism? Since government authorities
believe that the implementation of these procedures can be
made robust (and thus the anticipated likelihood of failure
is slight), they answer the question in the affirmative.
Critics of government initiatives promoting escrowed
encryption raise the concern that the risk of failure may be
quite large, and thus their answer to the question ranges from
"maybe" to "strongly negative." These parties generally prefer
to rely on technologies and procedures that they fully
understand and control to maintain the security of their
information, and at best, they believe that any escrow
procedures create a potentially serious risk of misuse that
must be stringently counteracted, diligently monitored, and
legally constrained. Moreover, they believe that reliance on
government-established procedures to maintain proper access
controls on escrowed keys invites unauthorized third parties
to target those responsible for upholding the integrity of the
escrow system.

   History suggests that procedural risks materialize as real
problems over the long run,(27) but in practice, a base of
operational experience is necessary to determine if these
risks are significant.

__________

   (26) For example, if a party external to the corporation
has the keys that provide access to that corporation's
encrypted information, the corporation is more vulnerable to
a loss of confidentiality, because the external party can
become the target of theft, extortion, blackmail, and the like
by unauthorized parties who are seeking that information. Of
course, the corporation itself is vulnerable, but since only
one target (either the corporation or any external key-holding
party) needs to be compromised, more targets lead to greater
vulnerability. Of course, if keys are split among a number of
external parties, the likelihood of compromise through this
route is reduced, but the overall risk of compromise is still
increased.

   (27) See, for example, Peter Neumann, *Computer-Related
Risks*, Addison Wesley, New York, 1995; and Charles Perrow,
*Normal Accidents: Living With High-Risk Technologies*, Basic
Books, New York, 1984. Neumann describes a large number of
computer-related reliability and safety problems and security
vulnerabilities that have arisen from combinations of
defective system implementation, flawed system design, and
human error in executing procedures. Perrow describes a number
of accidents that have occurred in other domains (e.g.,
maritime shipping, air traffic control, nuclear power plant
operation) that have resulted from a similar set of problems.

____________________________________________________________


    5.5.2 Impact on Law Enforcement Access to Information

   Even if escrowed encryption were to achieve significant
market penetration and were widely deployed, the question
would still remain regarding the likely effectiveness of a law
enforcement strategy to preserve wiretapping and data recovery
capabilities through deployments of escrowed encryption built
around voluntary use.(28) This question has surfaced most
strongly in the debate over EES, but as with other aspects of
the cryptography debate, the answer depends on the scenario in
question:

   +    Many criminals will reach first for devices and tools
that are readily at hand because they are so much more
convenient to use than those that require special efforts to
obtain. Criminals who have relatively simple and
straightforward needs for secure communications may well use
EES-compliant devices if they are widely available. In such
cases, they will simply have forgotten (or not taken
sufficient conscious account of) the fact that these "secure"
devices have features that provide law enforcement access,(29)
and law enforcement officials will obtain the same level and
quality of information they currently obtain from legal
wiretaps. Indeed, the level and quality of information might
be even greater than what is available today because criminals
speaking on EES-compliant devices might well have a false
sense of security that they could not be wiretapped.

   +    Criminals whose judgment suggests the need for extra
and nonroutine security are likely to use secure
communications devices without features for exceptional
access. In these cases, law enforcement officials may be
denied important information. However, the use of these
communications devices is likely to be an ad hoc arrangement
among participants in the criminal activity. Since many
criminal activities often require participants to communicate
with people outside the immediate circle of participants,
"secondary" wiretap information might be available if
nonsecure devices were used to communicate with others not
directly associated with the activity.

   Senior Administration officials have recognized that the
latter scenario is inevitable -- it is impossible to prevent
all uses of strong unescrowed encryption by criminals and
terrorists. However, the widespread deployment of strong
encryption without features for exceptional access would mean
that even the careless criminal would easily obtain
unbreakable encryption, and thus the Administration's
initiatives are directed primarily at the first scenario.

   Similar considerations would apply to escrowed encryption
products used to store data -- many criminals will use
products with encryption capabilities that are easily
available to store files and send e-mail. If these products
are escrowed, law enforcement officials have a higher
likelihood of having access to those criminal data files and
e-mail. On the other hand, some criminals will hide or conceal
their stored data through the use of unescrowed products or by
storing them on remote computers whose location is known only
to them, with the result that the efforts of law enforcement
authorities to obtain information will be frustrated.

----------

   (28) "Voluntary" has been used ambiguously in the public
debate on key escrow. It can mean voluntary use of key escrow
in any context or implementation, or it can mean voluntary use
of EES-compliant products. In the latter situation, of course,
the key-escrow feature would be automatic. Usually, the
context of its use will clarify which interpretation
of"voluntary" is intended.

   (29) Cf. point in Chapter 2 regarding behavior of criminals
with respect to wiretapped telephone calls.

____________________________________________________________


             5.6 MANDATORY VS. VOLUNTARY USE OF
                     ESCROWED ENCRYPTION


   As noted above, the federal government cannot compel the
private sector to use escrowed encryption in the absence of
legislation, whether for voice communications or any other
application. However, EES raised the very important public
concern that the use of encryption without features for
exceptional access might be banned by statute. The
Administration has stated that it has no intention of
outlawing the use of such cryptography or of regulating in any
other way the domestic use of cryptography. Nevertheless, no
administration can bind future administrations, and Congress
can change a law at any time. More importantly, widespread
acceptance of escrowed encryption, even if voluntary, would
put into place an infrastructure that would support such a
policy change. Thus, the possibility that a future
administration and/or Congress might support prohibitions on
unescrowed encryption cannot be dismissed. This topic is
discussed in depth in Chapter 7.

   With respect to the federal government's assertion of
authority in the use of the EES by private parties, there are
a number of gray areas. For example, a federal agency that has
adopted the EES for secure telephonic communications clearly
has the right to require all contractors that interact with it
to use EES-compliant devices as a condition of doing business
with the government;(30, 31) this point is explored further in
Chapter 6. More problematic is the question of whether an
agency that interacts with the public at large without a
contractual arrangement may require such use.

   A second important gray area relates to the establishment
of EES as a de facto standard for use in the private sector
through mechanisms described in Chapter 6. In this area,
Administration officials have expressed to the committee a
hope that such would be the case. If EES-compliant devices
were to become very popular, they might well drive potential
competitors (specifically, devices for secure telephonic
communications without features for exceptional access) out of
the market for reasons of cost and scarcity. Under such
circumstances, it is not clear that initially voluntary use of
the EES would in the end leave room for a genuine choice for
consumers.

-----------

   (30) For example, at present the Department of Defense
requires that contractors acquire and employ STU-III secure
telephones for certain sensitive telephonic communication with
DOD personnel. The Federal Acquisition Regulations (FAR) were
modified to allow the costs of such telephones to be charged
against contracts, to further encourage purchase of these
telephones.

   (31) One major manufacturer noted to the committee that
meeting federal requirements for encryption also reduces its
ability to standardize on a single solution in distributed
networks. Government-mandated key escrow could differ
substantially enough from key escrow systems required for
commercial operations that two separate key escrow systems
could be needed.

____________________________________________________________


              5.7 PROCESS THROUGH WHICH POLICY
            ON ESCROWED ENCRYPTION WAS DEVELOPED


   Much criticism of the Clipper initiative has focused on the
process through which the standard was established.
Specifically, the Clipper initiative was developed out of the
public eye, with minimal if any connection to the relevant
stakeholders in industry and the academic community, and
appeared to be "sprung" on them with an announcement in the
*New York Times*. Furthermore, a coherent approach to the
international dimensions of the problem was not developed, a
major failing since business communications are global in
nature. After the announcement of the Clipper initiative, the
federal government promulgated the EES despite a
near-unanimous condemnation of the proposed standard in the
public comments on it.

   Similar comments have been expressed with respect to the
August-September 1995 Administration proposal to relax export
controls on 64-bit software products if they are properly
escrowed. This proposal, advertised by the Administration as
the follow-up to the Gore-Cantwell letter of July 1994,(32)
emerged after about a year of virtual silence from the
Administration during which public interactions with industry
were minimal.

   The result has been a tainting of escrowed encryption that
inhibits unemotional discussion of its pros and cons and makes
it difficult to reach a rational and well-balanced decision.

----------

   (32) On July 20, 1994, Vice President Al Gore wrote to
Representative Maria Cantwell (D-Washington) expressing a
willingness to enter into "a new phase of cooperation among
government, industry representatives and privacy advocates
with a goal of trying to develop a key escrow encryption
system that will provide strong encryption, be acceptable to
computer users worldwide, and address our national security
needs as well." The Vice President went on to say that "we
welcome the opportunity to work with industry to design a more
versatile, less expensive system. Such a key escrow system
would be implementable in software, firmware, hardware, or any
combination thereof, would not rely upon a classified
algorithm, would be voluntary, and would be exportable.... We
also recognize that a new key escrow encryption system must
permit the use of private-sector key escrow agents as one
option... Having a number of escrow agents would give
individuals and businesses more choices and flexibility in
meeting their needs for secure communications." Letter
reprinted in Lance Hoffman (ed.), *Building in Big Brother:
The Cryptographic Policy Debate*, Springer-Verlag, New York,
1995, pp. 236-238.

____________________________________________________________


         5.8 AFFILIATION AND NUMBER OF ESCROW AGENTS


   Any deployment of escrowed encryption on a large scale
raises the question of who the escrow agents should be. (The
equally important question of their responsibilities and
liabilities is the subject of Section 5.9.) The original
Clipper/Capstone escrow approach called for agencies of the
executive branch to be escrow agents; at this writing, the
Administration's position seems to be evolving to allow
parties in the private sector to be escrow agents. Different
types of escrow agents have different advantages and
disadvantages.

   The use of executive branch agencies as escrow agents has
a number of advantages. Executive branch escrow agents can be
funded directly and established quickly, rather than depending
on the existence of a private sector market or business for
escrow agents. Their continuing existence depends not on
market forces but on the willingness of the U.S. Congress to
appropriate money to support them. Executive branch escrow
agents may well be more responsive than outside escrow agents
to authorized requests from law enforcement for keys.
Executive branch escrow agents can be enjoined more easily
from divulging to the target of a surveillance the fact that
they turned over a key to law enforcement officials, thereby
helping to ensure that a surveillance can be performed
surreptitiously. In the case of FISA intercepts, executive
branch escrow agents may be more protective of associated
classified information (such as the specific target of the
intercept). Under sovereign immunity, executive branch escrow
agents can disavow civil liability for unauthorized disclosure
of keys.

   Of course, from a different standpoint, most of these
putative advantages can be seen as disadvantages. If direct
government subsidy is required to support an escrow operation,
by definition it lacks the support of the market.(33) The high
speed with which executive branch escrow agents were
established suggested to critics that the Administration was
attempting to present the market with a fait accompli with
respect to escrow. A higher degree of responsiveness to
requests for keys may well coincide with greater disregard for
proper procedure; indeed, since one of the designated escrow
agencies (Treasury) also has law enforcement jurisdiction and
the authority to conduct wiretaps under some circumstances, a
Treasury escrow agent might well be faced with a conflict of
interest in managing keys. The obligation to keep the fact of
key disclosure secret might easily lead to circumvention and
unauthorized disclosures. The lack of civil liability and of
criminal penalties for improper disclosure might reduce the
incentives for compliance with proper procedure. Most
importantly, all executive branch workers are in principle
responsible to a unitary source of authority (the President).
Thus, concerns are raised that any corruption at the top
levels of government might diffuse downward, as exemplified by
past attempts by the Executive Office of the President to use
the Internal Revenue Service to harass its political enemies.
One result might be that executive branch escrow agents might
divulge keys improperly; a second result might be that
executive branch escrow agents could be more likely to reveal
the fact of key disclosure to targets in the executive branch
under investigation.

   Some of the concerns described above could be mitigated by
placement of escrow agents in the judiciary branch of
government on the theory that since judicial approval is
needed to conduct wiretaps, giving the judiciary control of
escrowed keys would in fact give it a way of enforcing the
Title III requirements for legal authorization. On the other
hand, the judiciary branch would have to rule on procedures
and misuse, thereby placing it at risk of a conflict of
interest should alleged misdeeds in the judiciary branch come
to light. Matters related to separation of powers between the
executive and judicial branches of government are also
relevant.

   The best argument for government escrow agents is that
government can be held politically accountable. When a
government does bad things, the government can be replaced.
Escrow agents must be trustworthy, and the question at root is
whether it is more appropriate to trust government or a
private party; the views on this point are diverse and often
vigorously defended.

   The committee believes that government-based escrow agents
present few problems when used to hold keys associated with
government work. Nonetheless, mistrust of government-based
escrow agents has been one of the primary criticisms of the
EES. If escrowed encryption is to serve broad social purposes
across government and the private sector, it makes sense to
consider other possible escrow agents in addition to
government escrow agents:

   +    Private organizations established to provide key
registration services (on a fee-for-service basis). Given that
some business organizations have certain needs for data
retrieval and monitoring of communications as described in
Chapter 3, such needs might create a market for private escrow
agents. Some organizations might charge more and provide users
with bonding against failure or improper revelations of keys;
other organizations might charge less and not provide such
bonding.

   +    Vendors of products with encryption capabilities and
features for exceptional access. Vendors acting as escrow
agents would face a considerable burden in having to comply
with registration requirements and might be exposed to
liability.(34) At the same time, vendors could register keys
at the time of manufacture or by default at some additional
expense.(35)

   +    Customers themselves. In the case of a corporate
customer, a specially trusted department within the
corporation that purchases escrowed encryption products could
act as an escrow agent for the corporation. Such "customer
escrow" of a corporation's own keys may be sufficient for its
needs; customer escrow would also enable the organization to
know when its keys have been revealed. Since legal entities
such as corporations will continue to be subject to extant
procedures of the law enforcement court order or subpoena, law
enforcement access to keys under authorized circumstances
could be assured. In the case of individual customers who are
also the end users of the products they purchase, the
individual could simply store a second copy of the relevant
keys as a form of customer escrow.

   Note especially that site licenses(36) to corporations
account for the largest portion of vendor sales in
software.(37) In a domestic context, corporations are entities
that are subject to legal processes in the United States that
permit law enforcement authorities to obtain information in
the course of a criminal investigation. In a foreign context,
exports to certain foreign corporations can be conditioned on
a requirement that the foreign corporation be willing to
escrow its key in such a manner that U.S. law enforcement
authorities would be able to have access to that information
under specified circumstances and in a manner to be determined
by a contract binding on the corporation. (The use of contract
law in this manner is discussed further in Chapter 7.) In
short, sales of escrowed encryption to foreign and domestic
corporate users could be undertaken in such a way that a very
large fraction of the installed user base would in fact be
subject to legal processes for obtaining information on keys.

   Nongovernment escrow agents are subject to the laws of the
government under whose jurisdiction they operate. In addition,
they raise other separate questions. For example, a criminal
investigation may target the senior officials of a
corporation, who may themselves be the ones authorized for
access to customer-escrowed keys; they might then be notified
of the fact of being wiretapped. The same would be true of end
users controlling their own copies of keys. Private
organizations providing key-holding services might be
infiltrated or even set up by criminal elements that would
frustrate lawful attempts to obtain keys or would even use the
keys in their possession improperly. Private organizations may
be less responsive to government requests than government
escrow agents. Finally, private organizations motivated by
profit and tempted to cut corners might be less responsible in
their conduct.

   A second important issue regarding escrow agents deals with
their number. Concentrating escrow arrangements in a few
escrow agents may make law enforcement access to keys more
convenient, but also focuses the attention of those who may
attempt to compromise those facilities -- the "big, fat
target" phenomenon -- because the aggregate value of the keys
controlled by these few agents is, by assumption, large.(38)
On the other hand, given a fixed budget, concentrating
resources on a few escrow agents may enable them to increase
the security against compromise, whereas spreading resources
among many escrow agents may leave each one much more open to
compromise. Indeed, the security of a well-funded and
well-supported escrow agent may be greater than that of the
party that owns the encryption keys; in this case, the
incremental risk that a key would be improperly compromised by
the escrow agent would be negligible. Increasing the number of
escrow agents so that each would be responsible for a
relatively small number of keys reduces the value of
compromising any particular escrow agent but increases the
logistical burdens, overhead, and expense for the nation. The
net impact on security against compromise of keys is very
scenario-dependent.(39)

----------

   (33) The original Clipper/Capstone proposal made no
provision for parties other than law enforcement authorities
to approach escrow agents, and in this context could be
regarded as a simple law enforcement initiative with no
particular relevance to the private sector. However, in light
of the Administration's arguments concerning the desirability
of escrowed encryption to meet the key backup needs of the
private sector, the importance of relevance to the private
sector is obvious.

   (34) For example, in the early days of an offering by AT&T
to provide picture-phone meeting services, the question arose
as to whether AT&T or the end user should provide security.
The business decision at the time was that AT&T should not
provide security because of the legal implications -- a
company that guaranteed security but failed to provide it was
liable. (Ironically, at least one major computer vendor
declined to provide encryption services for data
communications and storage on the grounds that encryption
would be provided by AT&T.) While today's AT&T support for the
PictureTel product line for videoconferencing (which provides
encryption capabilities) may suggest a different AT&T
perspective on the issue of who is responsible for providing
security, companies will have to decide for themselves their
own tolerable thresholds of risk for liability.

   (35) The cost of vendor registration would be high in the
case of certain software products. Specifically, products that
are distributed by CD-ROM must be identical, because it would
be very expensive (relative to current costs) to ship CD-ROMs
with unique serial numbers or keys. To some extent, the same
is true of products distributed by network -- it is highly
convenient and desirable from the vendor's perspective to have
just one file that can be downloaded upon user request,
although it is possible and more expensive to provide numbered
copies of software distributed by network.

   (36) Under a site license, a corporation agrees with a
vendor on a price for a certain (perhaps variable) number of
licenses to use a given software package. Site licenses also
include agreements on and conditions for support and
documentation.

   (37) The dominance of corporate sales over sales to
individuals was cited in the NSA/Department of Commerce
survey. See p. III-2.

   (38) Note also that maintaining the physical security of
escrow agents, especially government escrow agents, may be
especially critical; sabotage or destruction of an escrow
agent facility might well be seen in some segments of society
as a blow for freedom and liberty.

   (39) A similar issue arises with respect to certificate
authorities for authentication. As discussed in Chapter 2, a
cryptography-based authentication of an individual's identity
depends on the existence of an entity -- a certification
authority -- that is trusted by third parties as being able to
truly certify the identity of the individual in question.
Concentration of certification authority into a single entity
would imply that an individual would be vulnerable to any
penetration or malfeasance of the entity and thus to all of
the catastrophic effects that tampering with an individual's
digital identity would imply.

____________________________________________________________


           5.9 RESPONSIBILITIES AND OBLIGATIONS OF
       ESCROW AGENTS AND USERS OF ESCROWED ENCRYPTION


   Regardless of who the escrow agents are, they will hold
certain information and have certain responsibilities and
obligations.(40) Users of escrowed encryption also face
potential liabilities.


           5.9.1 Partitioning Escrowed Information

   Consider what precisely an escrow agent would hold. In the
simplest case, a single escrow agent would hold all of the
information needed to provide exceptional access to encrypted
information. (In the Clipper case, two escrow agents would be
used to hold the unit keys to all EES-compliant telephones.)

   A single escrow agent for a given key poses a significant
risk of single-point failure -- that is, the compromise of
only one party (the single escrow agent) places at risk all
information associated with that key. The Clipper/Capstone
approach addresses this point by designating two executive
branch agencies (Commerce and Treasury), each holding one
component (of two) of the unit key of a given Clipper/
Capstone-compliant device. Reconstruction of a unit key
requires the cooperation of both agencies. This approach was
intended to give the public confidence that their keys were
secure in the hands of the government.
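
   To make the two-agent split concrete, the sketch below (in
Python, used here purely for illustration and not part of any
EES specification) shows one standard way to divide a key into
two components such that neither component alone reveals
anything about the key, while combining both recovers it
exactly. Whether the actual escrow procedure uses precisely
this construction is not asserted here.

    import secrets

    def split_unit_key(unit_key: bytes):
        # One component is chosen at random; the other is the XOR of
        # the key with that random value. Either component alone is
        # statistically independent of the unit key.
        component_a = secrets.token_bytes(len(unit_key))
        component_b = bytes(x ^ y for x, y in zip(unit_key, component_a))
        return component_a, component_b

    def reconstruct_unit_key(component_a: bytes, component_b: bytes) -> bytes:
        # Reconstruction requires the cooperation of both escrow agents.
        return bytes(x ^ y for x, y in zip(component_a, component_b))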

   In the most general case, an escrow system can be designed
to separate keys into n components but with the mathematics of
the separation process arranged so that exceptional access
would be possible if the third party were able to acquire any
k (for k less than or equal to n) of these components.(41)
This approach is known as the "k-of-n" approach. For the
single escrow agent, k = 1 and n = 1; for the Clipper/Capstone
system, k = 2 and n = 2. But it is possible to design systems
where k is any number less than n; for example, the consent of
any three (k) of five (n) escrow agents could be sufficient to
enable exceptional access. Obviously, the greater the number
of parties that are needed to consent, the more cumbersome
exceptional access becomes.
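
   A general k-of-n separation can be realized with polynomial
secret sharing of the kind described by Micali and others (see
footnote 41). The following sketch, again in Python and purely
illustrative, splits a key (represented as an integer smaller
than the chosen prime) into n shares such that any k of them
suffice to reconstruct it; the particular prime and interface
are assumptions of this sketch, not features of any proposed
escrow system.

    import secrets

    PRIME = 2**127 - 1   # a prime larger than any key value to be shared

    def split(secret: int, k: int, n: int, prime: int = PRIME):
        # Random polynomial of degree k-1 whose constant term is the secret.
        coeffs = [secret] + [secrets.randbelow(prime) for _ in range(k - 1)]
        shares = []
        for x in range(1, n + 1):
            y = sum(c * pow(x, i, prime) for i, c in enumerate(coeffs)) % prime
            shares.append((x, y))
        return shares

    def recover(shares, prime: int = PRIME) -> int:
        # Lagrange interpolation at x = 0; any k distinct shares suffice.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % prime
                    den = den * (xi - xj) % prime
            secret = (secret + yi * num * pow(den, -1, prime)) % prime
        return secret

   With k = 2 and n = 2 this reduces to the Clipper/Capstone
arrangement; with k = 3 and n = 5, any three of five escrow
agents could together reconstruct the key.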

   It is a policy or business decision as to what the specific
values of k and n should be, or if indeed the choice about
specific values should be left to users. The specific values
chosen for k and n reflect policy judgments about needs for
recovery of encrypted data relative to user concerns about
improper exceptional access. Whose needs? If a national policy
decision determines k and n, it is the needs of law
enforcement and national security weighed against user
concerns. If the user determines k and n, it is the needs of
the user weighed against law enforcement and national security
concerns.

----------

   (40) Nothing in this discussion is intended to preclude the
possibility that an organization serving as an escrow agent
might also have responsibilities as a certification authority
(for authentication purposes, as described in Chapter 2).

   (41) See, for example, Silvio Micali, "Fair Public-Key
Cryptosystems," in *Advances in Cryptology -- Crypto '92*,
Springer-Verlag, Heidelberg, 1993, pp. 113-138.

____________________________________________________________


     5.9.2 Operational Responsibilities of Escrow Agents

   For escrowed encryption to play a major national role in
protecting the information infrastructure of the nation and
the information of businesses and individuals, users must be
assured about the operational obligations and procedures of
escrow agents. Clear guidelines will be required to regulate
the operational behavior of escrow agents, and clear
enforcement mechanisms must be set into place to ensure that
the escrow agents comply with those guidelines. While these
guidelines and mechanisms might come into existence through
normal market forces or cooperative agreements within
industries, they are more likely to require a legal setting
that would also include criminal penalties for malfeasance.

   Guidelines are needed to assure the public and law
enforcement agencies of two points:

   +    That information relevant to exceptional access (the
full key or a key fragment) will be divulged upon proper legal
request and that an escrow agent will not notify the key owner
of disclosure until it is legally permissible to do so, and

   +    That information relevant to exceptional access will
be divulged only upon proper legal request.

   Note that the fulfillment of the second requirement has
both an "abuse of authority" component and a technical and
procedural component. The first relates to an individual (an
"insider") who is in a position to give out the relevant
information but also to abuse his position by giving out that
information without proper authorization. The second relates
to the fact that even if no person in the employ of an escrow
agent improperly gives out the relevant information, an
"outsider" may be able to penetrate the security of the escrow
agent and obtain the relevant information without compromising
any particular individual. Such concerns are particularly
relevant to the extent that escrow agents are connected
electronically, since they would then be vulnerable in much
the same ways that all other parties connected to a network
are vulnerable. The security of networked computer systems is
difficult to assure with high confidence,(42) and the security
level required of escrow agents must be high, given the value
of their holdings to unauthorized third parties.

   Thus, those concerned about breaches of confidentiality
must be concerned about technical and procedural weaknesses of
the escrow agent infrastructure that would enable outsiders to
connect remotely to these sites and obtain keys, as well as
about insiders abusing their positions of trust. Either
possibility could lead not just to individual keys being
compromised, but also to wholesale compromise of all of the
keys entrusted to escrow agents within that infrastructure.
From a policy standpoint, it is necessary to have a
contingency plan that would facilitate recovery from wholesale
compromise.

   Box 5.6 describes law enforcement views on the
responsibilities of escrow agents. Box 5.7 describes draft
Administration views on requirements for maintaining the
integrity and security of escrow agents; Box 5.8 describes
draft Administration views on requirements for assuring access
to escrowed keys.

----------

   (42) See, for example, Computer Science and
Telecommunications Board (CSTB), National Research Council,
*Computers at Risk: Safe Computing in the Information Age*,
National Academy Press, Washington, D.C., 1991.

____________________________________________________________


               5.9.3 Liabilities of Escrow Agents

   In order to assure users that key information entrusted to
escrow agents remains secure, and to assure authorized third
parties that they will be able to obtain exceptional access
to encrypted data when necessary, escrow agents and their
employees must be
held accountable for improper behavior and for the use of
security procedures and practices that are appropriate to the
task of protection.

   Liabilities can be criminal or civil (or both). For
example, criminal penalties could be established for the
disclosure of keys or key fragments to unauthorized parties or
for the refusal to disclose such information to appropriately
authorized parties. It is worth noting that the implementing
regulations accompanying the EES proposal run counter to this
position, in the sense that they do not provide specific
penalties for failure to adhere to the procedures for
obtaining keys (which only legislation could do). The
implementing regulations specifically state that "these
procedures do not create, and are not intended to create, any
substantive rights for individuals intercepted through
electronic surveillance, and noncompliance with these
procedures shall not provide the basis for any motion to
suppress or other objection to the introduction of electronic
surveillance evidence lawfully acquired."(43)

   Questions of civil liability are more complex. Ideally,
levels of civil liability for improper disclosure of keys
would be commensurate with the loss that would be incurred by
the damaged party. For unauthorized disclosure of keys that
encrypt large financial transactions, this level is
potentially very large.(44) On the other hand, as a matter of
public policy, it is probably inappropriate to allow such
levels of damages. More plausible may be a construct that
provides what society, as expressed through the U.S. Congress,
thinks is reasonable (Box 5.9). Users of escrow agents might
also be able to buy their own insurance against unauthorized
disclosure. Note that holding government agencies liable for
civil damages might require an explicit change in the Federal
Tort Claims Act that waives sovereign immunity in certain
specified instances, or other legislative changes.

   On the other hand, the amount of liability associated with
compromising information related to data communications is
likely to dwarf the analogous amount for voice communications.
If escrowed encryption is adopted widely in data
communications, compromise of escrow agents holding keys
relevant to network encryption may be catastrophic, and may
become easier as the number of access points that can be
penetrated becomes larger.

   Note that liability of escrow agents may be related to the
voluntary use of escrow. A party concerned about large
potential losses would have alternatives to escrowed
encryption -- namely, unescrowed encryption -- that would
protect the user against the consequences of improper key
disclosure. Under these circumstances, a user whose key was
compromised could be held responsible for his loss because he
did not choose to use unescrowed encryption; an escrow agent's
exposure to liability would be limited to the risks associated
with parties that use its services. On the other hand, if
escrowed encryption were the only cryptography permitted to be
used, then by assumption the user would have no alternatives,
and so in that case, an escrow agent would shoulder a larger
liability.

   Another aspect of liability could arise if the escrow
agents were also charged with the responsibilities of
certificate authorities. Under some circumstances, it might be
desirable for the functions of escrow agents and certificate
authorities to be carried out by the same organization. Thus,
these dual-purpose organizations would have all of the
liabilities carried by those who must certify the authenticity
of a given party.

----------

   (43) U.S. Department of Justice, *Authorization Procedures
for Release of Encryption Key Components in Conjunction with
Intercepts Pursuant to Title III and FISA*, February 4, 1994.
Reprinted in Lance J. Hoffman (ed.), *Building in Big
Brother*, 1995, pp. 243-246.

   (44) Even if these transactions are authenticated (as most
large transactions would be), large transactions that are
compromised could lead to loss of bids and the like by the
firms involved in the transaction.

____________________________________________________________


    5.10 THE ROLE OF SECRECY IN ENSURING PRODUCT SECURITY


   The fact that EES and the Fortezza card involve classified
algorithms has raised the general question of the relationship
between secrecy and the maintenance of a product's
trustworthiness in providing security. Specifically, the
Clipper/Capstone approach is based on a secret (classified)
encryption algorithm known as Skipjack. In addition, the
algorithm is implemented in hardware (a chip) whose design is
classified. The shroud of secrecy surrounding the hardware and
algorithms needed to implement EES and Fortezza makes skeptics
suspect that encrypted communications could be decrypted
through some secret "back door" (i.e., without having the
escrowed key).(45)

   Logically, secrecy can be applied to two aspects of an
encryption system: the algorithms used and the nature of the
implementation of these algorithms. Each is addressed in turn.
Box 5.10 describes a historical perspective on cryptography
and secrecy that is still valid today.

----------

   (45) A kind of de facto secret back door can result from
the fact that vendors of security products employing Clipper
or Capstone technology are not likely to advertise the fact
that the relevant encryption keys are escrowed with the U.S.
government. Thus, even if the escrowing capability is "open"
in the sense that no one involved makes any attempt to hide
that fact, a user who does not know enough to ask about the
presence or absence of escrowing features may well purchase
such products without realizing that those features are
present. Functionally,
escrowing of which the user is ignorant is equivalent for that
user to a "secret" back door.

____________________________________________________________


                  5.10.1 Algorithm Secrecy

   The use of secret algorithms for encryption has advantages
and disadvantages. From an information security standpoint, a
third party who knows the algorithm associated with a given
piece of ciphertext has an enormous advantage over one who
does not -- if the algorithm is unknown, cryptanalysis is much
more difficult. Thus, the use of a secret algorithm by those
concerned about information security presents an additional
(and substantial) barrier to those who might be eavesdropping.
From a signals intelligence (SIGINT) standpoint, it is
advantageous to keep knowledge of good encryption out of the
hands of potential SIGINT targets. Thus, if an algorithm
provides good cryptographic security, keeping the algorithm
secret prevents the SIGINT target from implementing it. In
addition, if an algorithm is known to be good, studying it in
detail can reveal a great deal about what makes any algorithm
good or bad. Algorithm secrecy thus helps to keep such
information out of the public domain.(46)

   On the other hand, algorithm secrecy entails a number of
disadvantages as well. One is that independent analysis of a
secret algorithm by the larger community is not possible.
Without such analysis, flaws may remain in the algorithm that
compromise the security it purports to provide. If these flaws
are kept secret, users of the algorithm may unknowingly
compromise themselves. Even worse, sophisticated users who
need high assurances of security are unable to certify for
themselves the security it provides (and thus have no sense of
the risks they are taking if they use it). In most cases, the
real issue is whether the user chooses to rely on members of
the academic cryptography community publishing in the open
literature, or on members of the classified military
community or members of the commercial cryptography community
who are unable to fully disclose what they know about a
subject because it is classified or proprietary.

   A second disadvantage of algorithm secrecy is the fact that
if a cryptographic infrastructure is based on the assumption
of secrecy, public discovery of those secrets can compromise
the ends to be served by that infrastructure. For example, if
a cryptographic infrastructure based on a secret algorithm
were widely deployed, and if that algorithm contained a secret
and unannounced "back door" that allowed those with knowledge
of this back door easy access to encrypted data, that
infrastructure would be highly vulnerable and could be
rendered untrustworthy in short order by the public disclosure
of the back door.

   A third disadvantage is that a secret algorithm cannot be
implemented in software with any degree of assurance that it
will remain secret. Software, as it exists ready for actual
installation on a computer (so-called object code or
executable code), can usually be manipulated with special
software tools to yield an alternate form (namely, source
code) reflecting the way the creating programmer designed it,
and therefore revealing many, even most, of its operational
details, including any algorithm embedded within it. This
process is known as "decompiling" or "disassembly" and is a
standard technique in the repertoire of software
engineers.(47)

   All of the previous comments apply to secrecy whether it is
the result of government classification decisions or vendor
choices to treat an algorithm as a trade secret. In addition,
vendors may well choose to treat an algorithm as a trade
secret to obtain the market advantages that proprietary
algorithms often bring. Indeed, many applications of
cryptography for confidentiality in use today are based on
trade-secret algorithms such as RC2 and RC4.

----------

   (46) Of course, if other strong algorithms are known
publicly, the force of this argument is weakened from a
practical standpoint. For example, it is not clear that the
disclosure of Skipjack would be harmful from the standpoint of
making strong algorithms public, because triple-DES is already
publicly known, and triple-DES is quite strong.

   (47) As one example, the RC2 encryption algorithm,
nominally a trade secret owned by RSA Data Security Inc., was
posted to the Internet in early 1996, apparently as the
result of a "disassembly" of a product embedding that
algorithm. Personal communication, Robert Baldwin, RSA Data
Security Inc., May 16, 1996.

____________________________________________________________


      5.10.2 Product Design and Implementation Secrecy

   Product design and implementation secrecy has a number of
advantages. For example, by obscuring how a product has been
designed, secrecy makes it more difficult for an outsider to
reverse-engineer the product in such a way that he could
understand it better or, even worse, modify it in some way.
Since vulnerabilities sometimes arise in implementation,
keeping the implementation secret makes it harder for an
attacker to discover and then exploit those vulnerabilities.
Design and implementation secrecy thus protects any secrets
that may be embedded in the product for a longer time than if
they were to be published openly.

   On the other hand, it is taken as an axiom by those in the
security community that it is essentially impossible to
maintain design or implementation secrecy indefinitely. Thus,
the question of the time scale of reverse engineering is
relevant -- given the necessary motivation, how long will it
take and how much in resources will be needed to reverse-
engineer a chip or a product?

   +    For software, reverse engineering is based on
decompilation or disassembly (as described in Section 5.10.1).
The larger the software product, the longer it takes to
understand the original program; even a small one can be
difficult to understand, especially if special techniques have
been used to obscure its functionality. Modification of the
original program can present additional technical difficulties
(the product may be designed in such a way that disassembling
or decompiling the entire product is necessary to isolate
critical features that one might wish to modify). Certain
techniques can be used to increase the difficulty of making
such modifications,(48) but there is virtual unanimity in the
computer community that modification cannot be prevented
forever. How robust must these anti-reverse-engineering
features be? The answer is that they must be robust enough
that the effort needed to overcome them is greater than the
effort needed to develop an encryption system from scratch.

   +    For hardware, reverse engineering takes the form of
physical disassembly and/or probing with x-rays of the
relevant integrated circuit chips. Such chips can be designed
to resist reverse engineering in a way that makes it difficult
to understand what various components on the chip do. For
example, the coating on a die used to fabricate a chip may be
designed so that removal of the coating results in removal of
one or more layers of the chip, thus destroying portions of
what was to be reverse-engineered. The chip may also be
fabricated with decoy or superfluous elements that would
distract a reverse engineer. For all of these reasons, reverse
engineering for understanding a chip's functions is difficult
(though not impossible), and under some circumstances, it is
possible to modify a chip. In general, reverse engineering of
the circuits and devices inside a chip requires significant
expertise and access to expensive tools.(49)

   An important factor that works against implementation
secrecy is the wide distribution of devices or products whose
implementation is secret. It is difficult to protect a device
against reverse engineering when millions of those devices are
distributed around the world without any physical barriers
(except those on the implementation itself) to control access
to them. Everyone with an EES-compliant telephone or a
Fortezza card, for example, will have access to the chip that
provides encryption and key escrow services.

   The comments above refer to the feasibility of maintaining
implementation secrecy. But there are issues related to its
desirability as well. For example, implementation secrecy
implies that only a limited number of vendors can be trusted
to produce a given implementation. Thus, foreign production of
Clipper/Capstone-compliant devices under classification
guidelines raises problems unless foreign producers are
willing to abide by U.S. security requirements.

   A more important point is that implementation secrecy also
demands trust between user and supplier/vendor. Within the
government, users within government agencies generally trust
other parts of the government to provide adequate services as
a supplier. But in the private sector, such trust is not
necessarily warranted. Users who are unable to determine for
themselves what algorithms are embedded in the computer and
communications products they use must trust the vendor to
have provided algorithms that do what the user wants done,
and the vast majority of users fall into this category. Such
opacity functions as a de facto mechanism of secrecy; it
impedes user knowledge of a product's inner workings and is
exploited by the distributors of computer viruses and worms.
As a
result, choosing between the use of self-implemented source
code and a pre-packaged program to perform certain functions
is in many ways analogous to choosing between use of
unclassified and classified algorithms.

   An information security manager with very high security
needs must make trade-offs of assurance vs. cost. In general,
the only way to be certain that the algorithms used are the
ones claimed to be used is to implement them on one's own. Yet
if the manager lacks the necessary knowledge and experience,
a self-implementation may not be as secure or as capable as
one developed by a trusted vendor. A self-implementer also
carries the considerable burden of development costs that a
commercial vendor can amortize over many sales.

   As a result, security-conscious users of products whose
inner workings are kept secret must (1) trust the vendor
implicitly (based on factors such as reputation), or (2) face
the possibility of various extreme scenarios. Here are two:

   +    The hardware of a secret device can be dynamically
modified; for example, electrically erasable read-only
memories can direct the operation of a processor. One possible
scenario with secret hardware is that a chip that initially
provides Clipper-chip functionality might be reprogrammed when
it first contacts a Clipper/Capstone-compliant device to allow
nonescrowed, unauthorized access to it; such a means of
"infection" is common with computer viruses. In other words,
the Skipjack algorithm may have been embedded in the chip when
it was first shipped, but after the initial contact, the
algorithm controlling the chip is no longer Skipjack.

   +    An algorithm that is not Skipjack is embedded by the
manufacturer in chips purporting to be Clipper or Capstone
chips. Since the utility of a vector test depends on the
availability of an independent implementation of the
algorithm, it is impossible for the user to perform this test
independently if the user has no reference point. As a result,
the user has no access to an independent test of the chip that
is in the user's "Clipper/Capstone-compliant" device, and so
any algorithm might have been embedded.(50)

   Any technically trained person can invent many other such
scenarios. Thus, public trust in the technical desirability of
the EES and Fortezza for exceptional access depends on a high
degree of trust in the government, entirely apart from any
fears about compromising escrow agents wherever they are
situated.

   Of course, some of the same considerations go beyond the
Skipjack algorithm and the Clipper/Capstone approach. In
general, users need confidence that a given product with
encryption capabilities indeed implements a given algorithm.
Labeling a box with the letters "DES" does not ensure that the
product inside really implements DES. In this case, the fact
that the DES algorithm is publicly known facilitates testing
to verify that the algorithm is implemented correctly.(51) If
its source code is available for inspection, other
security-relevant aspects of a software product can be
examined to a certain extent, at least up to the limits of the
expertise of the person checking the source code. But for
software products without source code, and especially for
hardware products that cannot easily be disassembled, and even
more so for hardware products that are specifically designed
to resist disassembly, confidence in the nonalgorithm security
aspects of the product is more a matter of trusting the vendor
than of the user making an independent technical verification
of an implementation.(52) In some sectors (e.g., banking,
classified military applications), however, independent
technical verification is regarded as essential.
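
   For a publicly known algorithm such as DES, the vector test
described in footnote 51 is easy to automate. The sketch below
is illustrative only: it assumes a Python environment with the
pycryptodome library as the independent reference
implementation and a hypothetical product_encrypt() function
standing in for whatever interface the product under test
exposes.

    import secrets
    from Crypto.Cipher import DES   # pycryptodome, used as the reference

    def reference_encrypt(key: bytes, block: bytes) -> bytes:
        return DES.new(key, DES.MODE_ECB).encrypt(block)

    def vector_test(product_encrypt, trials: int = 100) -> bool:
        # Encrypt randomly chosen blocks under randomly chosen keys with
        # the product and compare against the independent reference.
        for _ in range(trials):
            key = secrets.token_bytes(8)     # one 64-bit DES key
            block = secrets.token_bytes(8)   # one 64-bit plaintext block
            if product_encrypt(key, block) != reference_encrypt(key, block):
                return False
        return True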

   Finally, a given product may properly implement an
algorithm but still be vulnerable to attacks that target the
part of the product surrounding the implementation of the
algorithm. Such vulnerabilities are most common in the initial
releases of products that have not been exposed to public test
and scrutiny. For example, a security problem with the
Netscape Navigator's key-generation facility could have been
found had the implementation in which the key generator was
embedded been available for public examination prior to its
release, even though the encryption algorithm itself was
properly implemented.(53)

----------

   (48) For example, Trusted Information Systems Inc. of
Glenwood, Maryland, has advocated an approach to preventing
modification that relies on the placement of integrity locks
at strategic locations. With such an approach, a change to the
disassembled source code would have to be reflected properly
in all relevant integrity locks; doing so might well involve
disassembly of an entire product rather than of just one
module of the product. Nevertheless, such an approach cannot
prevent modification, although it can make modification more
difficult. Such anti-reverse-engineering features may also
increase the difficulty of vendor maintenance of a product.
Increased difficulty may be a price vendors must pay in order
to have secure software implementations.

   (49) Estimates of the cost to reverse-engineer the Clipper
chip nondestructively cover a wide range, from "doable in
university laboratories with bright graduate students and
traditions of reverse engineering" (as estimated by a number
of electrical engineers in academia with extensive experience
in reverse engineering) to as much as $30 million to $50
million (as estimated in informal conversations between JASON
members and DOD engineers). The cost may well be lower if
large numbers of chips are available for destructive
inspection.

   (50) According to Dorothy Denning, the review team for
Skipjack (see footnote 5 of this chapter) compared the output
from Clipper chips with output from the software version of
Skipjack that the review team obtained for review to verify
that the algorithm on the chips was the same as the software
version. Personal communication, Dorothy Denning, Georgetown
University.

   (51) As described in Chapter 4, the product tester can use
the product to encrypt a randomly chosen set of values with a
randomly chosen key, and compare the encrypted output to the
known correct result obtained through the use of a product
known to implement the algorithm correctly. This is known as
a vector test.

   (52) Such a comment is not meant to preclude the
possibility of an independent certifying authority, a kind of
"Consumers' Reports" for crypto equipment and products. Such
organizations have been proposed to evaluate and certify
computer security, and as of this writing, three U.S. firms
have received NIST approval to evaluate the conformance of
products to FIPS 140-1, the FIPS for cryptographic modules.

   (53) This security problem is referenced in footnote 34,
Chapter 2. The lack of prior vetting for Netscape Navigator is
described by Kathleen Murphy, "A Second Security Breach," *Web
Week*, Volume 1(6), October 1995, p. 8.

____________________________________________________________


            5.11 THE HARDWARE-SOFTWARE CHOICE IN
                   PRODUCT IMPLEMENTATION


   After the Clipper initiative was announced, and as the
debate over escrowed encryption broadened to include the
protection of data communications and stored data, the mass
market software industry emphasized that a hardware solution
to cryptographic security -- as exemplified by the Clipper
chip -- would not be satisfactory. They argued with some force
that only a software-based approach would encourage the
widespread use of encryption envisioned for the world's
electronic future, making several points:

   +    Customers have a strong preference for using
integrated cryptographic products. While stand-alone products
with encryption capabilities could be made to work, in general
they lack operational convenience for the applications that
software and systems vendors address.

   +    Compared to software, hardware is expensive to
manufacture. In particular, the relevant cost is not simply
the cost of the hardware encryption device compared to a
software encryption package,(54) but also the cost of any
modifications to the hardware environment needed to accept the
hardware encryption device.(55) For example, one major company
noted to the committee that the adoption of the Fortezza card,
a card that fits into the PC-card slots available on most
laptop computers, would be very expensive in their desktop
computing environment, because most of their desktop computers
do not have a PC-card slot and would have to be modified to
accept a Fortezza card. By contrast, a software encryption
product can simply be loaded via common media (e.g., a CD-ROM
or a floppy disk) or downloaded via a network.

   +    The fact that hardware is difficult to change means
that problems found subsequent to deployment are more
difficult to fix. For example, most users would rather
install a software fix by loading a CD-ROM into their
computers than open up their machines to install a new chip
containing a hardware fix.

   +    Hardware-based security products have a history of
being market-unfriendly. Hardware will, in general, be used
only to the extent that the required hardware (and its
specific configuration) is found in user installations.
Moreover, hardware requirements can be specified for software
only when that hardware is widely deployed. For example, a
technical approach to the software piracy problem has been
known for many years; the approach requires the installation
of special-purpose hardware that is available only to those
who obtain the software legitimately. This "solution" has
failed utterly in the marketplace, and software piracy remains
a multibillion-dollar-per-year problem.

   +    Hardware for security consumes physical space and
power in products. For example, a hardware-based encryption
card that fits into an expansion slot on a computer takes up
a slot permanently, unless the user is willing to install and
deinstall the card for every use. It also creates an
additional power demand on electronic devices where power and
battery life are limited.

   In general, products with encryption capabilities today use
software or hardware or both to help ensure security.(56) The
crux of the hardware-software debate is what is good enough to
ensure security. The security needed to manage electronic
cash in the international banking system must be much
stronger than that needed to protect word processing files
created by private individuals. Thus, software-based
cryptography might
work for the latter, while hardware-based cryptography might
be essential for the former.

   Products with encryption capabilities must be capable of
resisting attack. But since such products are often embedded
in operating environments that are themselves insecure, an
attacker may well choose to attack the environment rather than
the product itself. For example, a product with encryption
capabilities may be hardware-based, but the operating
environment may leave the encryption keys or the unencrypted
text exposed.(57) More generally, in an insecure environment,
system security may well not depend very much on whether the
cryptography per se is implemented in hardware or software or
whether it is weak or strong.

   In the context of escrowed encryption, a second security
concern arises -- a user of an escrowed encryption product may
wish to defeat the escrow mechanism built into the product.
Thus, the escrow features of the product must be bound to the
product in a way that cannot be bypassed by some
reverse-engineered modification to the product. This
particular problem is known as binding or, more explicitly,
escrow binding; escrow binding is an essential element of any
escrow scheme that is intended to provide exceptional access.

   Concern over how to solve the escrow binding problem was
the primary motivation for the choice of a hardware approach
to the Clipper initiative. As suggested in Section 5.10, the
functionality of a hardware system designed to resist change
is indeed difficult to change, and so hardware implementations
have undeniable advantages for solving the escrow binding
problem.(58) An EES-compliant device would be a telephone
without software accessible to the user, and would provide
high assurance that the features for exceptional access would
not be bypassed.

   As the debate has progressed, ideas for software-based
escrow processes have been proposed. The primary concern of
the U.S. government about software implementations is that
once a change has been designed and developed that can bypass
the escrow features ("break the escrow binding"), such a
change can be easily propagated through many different
channels and installed with relatively little difficulty. In
the committee's view, the important question is whether
software solutions to the escrow binding problem can provide
an acceptable level of protection against the reverse
engineer. Whether an escrowed encryption product is
implemented in software (or hardware for that matter), the
critical threshold is the difficulty of breaking the escrow
binding (i.e., bypassing the escrowing features) compared to
the effort necessary to set up an independent unescrowed
encryption system (perhaps as part of an integrated product).
If it is more difficult to bypass the escrow features than to
build an unescrowed system, then "rogues" who want to defeat
exceptional access will simply build an unescrowed system. The
bottom line is that an escrowed encryption product does not
have to be perfectly resistant to breaking the escrow binding.

   A possible mitigating factor is that even if a software
"patch" is developed that would break the escrow binding of an
escrowed encryption software product, it may not achieve wide
distribution even among the criminals that would have the most
to gain from such a change. Experience with widely deployed
software products (e.g., operating systems) indicates that
even when a software fix is made available for a problem in a
product, it may not be implemented unless the anomalous or
incorrect software behavior is particularly significant to an
end user. If this is the case for products that are as
critical as operating systems, it may well be true for
products with more specialized applications. On the other side
of the coin, many parties (e.g., criminals) may care a great
deal about the presence of escrowing and thus be highly
motivated to find "fixes" that eliminate escrowing.

----------

   (54) In a recent contract, a vendor agreed to provide
Fortezza cards at $69 per card. See Paul Constance, "After
Complaining $99 Was Too Low, Fortezza Vendors Come in at $69,"
*Government Computer News*, October 2, 1995, p. 6.

   (55) One vendor is manufacturing a circuit board for
encryption that fits into a 3.5" floppy disk drive. However,
this device does not employ the Capstone/Fortezza approach.
See Elizabeth Sikorovsky, "Device Offers Alternative to PC
Card-Based Encryption," *Federal Computer Week*, November 13,
1995, pp. 29 and 35.

   (56) Note that the dividing line between hardware and
software is not always clear. In particular, product designers
use the term "firmware" to refer to a design approach that
enters software into a special computer memory (an integrated
circuit chip) that usually is subsequently unchangeable
(read-only memory; ROM). Sometimes an alternate form of
memory is used that does permit changes under controlled
conditions (erasable programmable ROM; EPROM). Such software-
controlled hardware (microprogrammed hardware) has the
convenience that the functionality of the item can be updated
or changed without redesign of the hardware portion.

   (57) Peter G. Neumann, *Can Systems Be Trustworthy with
Software-Implemented Cryptography?*, SRI International, Menlo
Park, California, October 28, 1994.

   (58) A device controlled by software stored in a
programmable read-only memory is for all intents and purposes
the same as "pure hardware" in this context.

____________________________________________________________


       5.12 RESPONSIBILITY FOR GENERATION OF UNIT KEYS


   Key generation is the process by which cryptographic keys
are produced. Two types of keys are relevant:

   +    A session key is required for each encryption of
plaintext into ciphertext; this is true whether the
information is to be stored or communicated. Ultimately, the
intended recipients of this information (those who retrieve it
from storage or those who receive it at the other end of a
communications channel) must have the same session key. For
maximum information security, a new session key is used with
every encryption. (See footnote 7 of this chapter for more
discussion.)

   +    A unit key is a cryptographic key associated with a
particular product or device owned or controlled by a specific
individual. Unit keys are often used to protect session keys
from casual observation in escrowed encryption products, but
precisely how they are used depends on the specifics of a
given product.

   In the most general case, the session key is a random
number, and a different one is generated anew for each
encryption. But the unit key is a cryptographic variable that
typically changes on a much longer time scale than does the
session key. In many escrowed encryption schemes, knowledge of
the unit key enables a third party to obtain the session key
associated with any given encryption.
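
   As a purely illustrative sketch of the relationship between
the two kinds of keys, the Python fragment below generates a
fresh random session key for each encryption and "wraps"
(encrypts) it under a long-lived unit key. The use of
triple-DES from the pycryptodome library is an assumption of
the sketch, not a description of how any particular escrowed
encryption product works.

    import secrets
    from Crypto.Cipher import DES3   # pycryptodome; stands in for a product's unit-key cipher

    def new_session_key() -> bytes:
        # A fresh session key is generated for every encryption.
        return secrets.token_bytes(16)

    def wrap_session_key(unit_key: bytes, session_key: bytes) -> bytes:
        # The long-lived unit key (16 or 24 bytes for triple-DES) protects
        # the session key; anyone holding the unit key can recover it.
        return DES3.new(unit_key, DES3.MODE_ECB).encrypt(session_key)

    def unwrap_session_key(unit_key: bytes, wrapped: bytes) -> bytes:
        return DES3.new(unit_key, DES3.MODE_ECB).decrypt(wrapped)

   The point of the sketch is simply that knowledge of the
unit key exposes every session key wrapped under it, which is
why compromise of a unit key is far more serious than
compromise of any single session key.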

   The Clipper/Capstone approach requires that the unit key be
generated by the manufacturer at the time of manufacture ("at
birth") and then registered prior to sale with escrow agents
in accordance with established procedures. Such an approach
has one major advantage from the standpoint of those who may
require exceptional access in the future -- it guarantees
registration of keys, because users need not take any action
to ensure registration.

   At the same time, since the Clipper/Capstone approach is
based on a hardware-based implementation that is not
user-modifiable, a given device has only one unit key for its
entire lifetime, although at some cost, the user may change
the Clipper chip embedded in the device.(59) If the unit key
is compromised, the user's only recourse is to change the
chip. A user who does not do so violates one basic principle
of information security -- frequent changing of keys (or
passwords).(60) In addition, the fact that all unit keys are
known at the time of manufacture raises concerns that all keys
could be kept (perhaps surreptitiously) in some master
databank that would be accessible without going to the
designated escrow agents. The implication is that the user is
forced to trust several organizations and individuals involved
with the manufacturing process. Such trust becomes an implicit
aspect of the secrecy associated with EES-compliant devices.

   One alternative to unit key generation at birth is the
generation (or input) of a new unit key at user request. This
approach has the advantage that the user can be confident that
no one else retains a copy of the new key without his or her
knowledge. The disadvantage is that escrow of that key would
require explicit action on the user's part for that purpose.

   An alternative that has some of the advantages of each
approach is to install and register a unit key at birth, but
to design the product to allow the user to change the unit key
later. Thus, all products designed in this manner would have
"default" unit keys installed by the manufacturer and recorded
with some escrow agent; each of these keys would be different.
Users who took the trouble to install a new unit key would
have to take an explicit action to escrow it, but in many
cases, the inconvenience and bother of changing the unit key
would result in no action being taken. Thus, valid unit keys
would be held by escrow agents in two cases -- for products
owned by users who did not change the unit key, and for
products owned by users who chose to register their new keys
with escrow agents.

   Who is responsible for the collection of unit keys? Under
the Clipper/Capstone approach, the responsible party is the
U.S. government. But if nongovernment agencies were to be
responsible for escrowing keys (see Section 5.8), a large
market with many vendors producing many different types of
encryption products in large volume could result in a large
administrative burden on these vendors.

   The specific implementation of EES also raises an
additional point. As proposed, EES requires that unit keys be
given to government authorities upon presentation of legal
authorization. If these keys are still available to the
authorities after the period of legal authorization has
expired, the EES device is forever open to government
surveillance. To guard against this possibility,
Administration plans for the final Clipper key escrow system
provide for automatic key deletion from the decrypting
equipment upon expiration of the authorized period. Key
deletion is to be implemented on the tamper-resistant device
that law enforcement authorities will use to decrypt
Clipper-encrypted traffic. However, by early 1996, the
deployed interim key escrow system had not been upgraded to
include that feature.

----------

   (59) A Clipper chip costs about $10 when bought in large
lots. (Personal communication, March 22, 1996, Jimmy Dolphin,
Mykotronx.) Even when including retail mark-up costs and
labor, the cost of changing a Clipper chip is likely to be
less than $100.

   (60) However, since the Skipjack algorithm is classified,
simple knowledge of the unit key (or the session key) would
enable only those with knowledge of the algorithm to decrypt
the session key (or the session).

____________________________________________________________


     5.13 ISSUES RELATED TO THE ADMINISTRATION PROPOSAL
      TO EXEMPT 64-BIT ESCROWED ENCRYPTION IN SOFTWARE


   As noted in Chapter 4, the Administration has proposed to
treat software products with 64-bit encryption using any
algorithm as it currently treats products that are based on
40-bit RC2/RC4 algorithms, providing that products using this
stronger encryption are "properly escrowed." This change is
intended to make available to foreign customers of U.S.
software products stronger cryptographic protection than
they have today.

   This proposal has raised several issues.


         5.13.1 The Definition of "Proper Escrowing"

   The definition of "proper escrowing" (as the phrase is used
in the Administration's proposed new export rules in Box 5.3)
is that keys should be escrowed only with "escrow agent(s)
certified by the U.S. Government, or certified by foreign
governments with which the U.S. Government has formal
agreements consistent with U.S. law enforcement and national
security requirements." These agents would not necessarily be
government agencies, although in principle they could be.

   The obvious question is whether foreign consumers will be
willing to purchase U.S. products with encryption capabilities
when it is openly announced that the information security of
those products could be compromised by or with the assistance
of escrow agents certified by the U.S. government. While the
draft definition does envision the possibility that escrow
agents could be certified by foreign governments (e.g., those
in the country of sale), formal agreements often take a long
time to negotiate, during which time U.S. escrow agents would
hold the keys, or the market for such products would fail to
develop.

   For some applications (e.g., U.S. companies doing business
with foreign suppliers), interim U.S. control of escrow agents
may prove acceptable. But it is easy to imagine other
applications for which it would not, and in any case a larger
question remains: What would be the incentive for foreign
users to purchase such products from U.S. vendors if
comparably strong but unescrowed foreign products with
encryption capabilities were available? As the discussion in
Chapter 2 points out, integrated products with encryption
capabilities are generally available today from U.S. vendors.
However, how long the U.S. monopoly in this market will last
is an open question.

   The issue of who holds the keys in an international context
is explored further in Appendix G.


              5.13.2 The Proposed Limitation of
               Key Lengths to 64 Bits or Less

   The most important question raised by the 64-bit limitation
is this: If the keys are escrowed and available to law
enforcement and national security authorities, why does it
matter how long the keys are? In response to this question,
senior Administration officials have said that the limitation
to 64 bits is a way of hedging against the possibility of
finding easily proliferated ways to break the escrow binding
built into software, with the result that U.S. software
products without effective key escrow would become available
worldwide. Paraphrasing the remarks of a senior Administration
official at the 1995 International Cryptography Institute,
"The 64-bit limit is there because we might have a chance of
dealing with a breakdown of software key escrow 10 to 15 years
down the line; but if the key length implied a work factor of
something like triple-DES, we would *never* [emphasis in
original] be able to do it."
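
   A rough back-of-the-envelope calculation illustrates the
reasoning behind this remark. The Python sketch below compares
exhaustive key-search times at an assumed (and purely
hypothetical) rate of one billion keys per second; the rate is
an assumption of the sketch, not a figure from the
Administration or the committee.

    RATE = 10**9   # assumed keys tried per second (hypothetical)

    for label, bits in [("40-bit key", 40),
                        ("64-bit key", 64),
                        ("triple-DES (~112-bit strength)", 112)]:
        seconds = 2**bits / RATE
        years = seconds / (3600 * 24 * 365)
        print(f"{label}: about {years:.3g} years to exhaust at {RATE:.0e} keys/second")

   At that rate a 40-bit key space can be exhausted in well
under a day, a 64-bit key space in several centuries, and a
key space of triple-DES strength only on time scales vastly
longer than the age of the universe -- consistent with the
distinction drawn between a possible breakthrough "10 to 15
years down the line" and "never."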

   Two factors must be considered in this argument. One is the
likelihood that software key escrow can in fact be
compromised. This subject is considered in Sections 5.10.2 and
5.11. But a second point is the fact that the 64-bit limit is
easily circumvented by multiple encryption under some
circumstances. Specifically, consider a stand-alone
security-specific product for file encryption that is based on
DES and is escrowed. Such a product -- in its unaltered state
-- meets all of the proposed draft criteria for export. But
disassembly of the object code of the program (to defeat the
escrow binding) may also reveal the code for DES encryption in
the product. Once the source code for the DES encryption is
available, it is a technically straightforward exercise to
build a package that uses the product's DES capability to
apply triple-DES encryption to a file.
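
   The composition itself is simple, as the following sketch
shows. It assumes the pycryptodome library's single-DES as a
stand-in for the DES code recovered from the product; the
encrypt-decrypt-encrypt (EDE) ordering shown is the
conventional way of building triple-DES out of single DES.

    from Crypto.Cipher import DES   # stand-in for the recovered single-DES code

    def triple_des_encrypt_block(block: bytes, k1: bytes, k2: bytes, k3: bytes) -> bytes:
        # Encrypt-decrypt-encrypt of one 8-byte block with three
        # independent 8-byte DES keys.
        step1 = DES.new(k1, DES.MODE_ECB).encrypt(block)
        step2 = DES.new(k2, DES.MODE_ECB).decrypt(step1)
        return DES.new(k3, DES.MODE_ECB).encrypt(step2)

    def triple_des_decrypt_block(block: bytes, k1: bytes, k2: bytes, k3: bytes) -> bytes:
        step1 = DES.new(k3, DES.MODE_ECB).decrypt(block)
        step2 = DES.new(k2, DES.MODE_ECB).encrypt(step1)
        return DES.new(k1, DES.MODE_ECB).decrypt(step2)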


                         5.14 RECAP


   Escrowed encryption is one of several approaches to
providing exceptional access to encrypted information. The
U.S. government has advanced a number of initiatives to
support the insertion of escrow features into products with
encryption capabilities that will become available in the
future, including the Escrowed Encryption Standard, the
Capstone/Fortezza initiative, and a proposal to liberalize
export controls on products using escrowed encryption. Its
support of escrowed encryption embodies the government's
belief that the benefit to law enforcement and national
security from exceptional access to encrypted information
outweighs the damage owing to loss of confidentiality that
might occur with the failure of procedures intended to prevent
unauthorized access to the escrow mechanism.

   Escrowed encryption provides *more* confidentiality than
leaving information unprotected (as most information is
today), but less confidentiality than what could be provided
by good implementations of unescrowed cryptography. On the
other hand, escrowed encryption provides more capability for
exceptional access under circumstances of key loss or
unavailability than does unescrowed encryption. All users
will have to address this trade-off between the level of
confidentiality provided and the ability to recover from key
loss or unavailability.

   The central questions with respect to escrowed encryption
are the following:

   +    With what degree of confidence is it possible to
ensure that third parties will have access to encrypted
information only under lawfully authorized circumstances?

   +    What is the trade-off for the user between potentially
lower levels of confidentiality and higher degrees of
confidence that encrypted data will be available when
necessary?

____________________________________________________________

             BOX 5.1 Key Technical Attributes of
                   the Clipper Initiative

   1. A chip-unique secret key, the "unit key" or "device key"
or "master key" would be embedded in the chip at the time of
fabrication and could be obtained by law enforcement officials
legally authorized to do so under Title III.

   2. Each chip-unique device key would be split into two
components.

   3. The component parts would be deposited with and held
under high security by two trusted third-party escrow agents
proposed to be agencies of the U.S. government. Note:
"Third-party" is used here to indicate parties other than
those participating in the communication.

   4. A law enforcement access field (LEAF) would be a
required part of every transmission. The LEAF would contain
(a) the current session key, encrypted with a combination of
the device-unique master key and a different but secret
"family key" also permanently embedded in the chip and (b) the
chip serial number, also protected by encryption with the
family key.

   5. Law enforcement could use the information in the LEAF to
identify the particular device of interest, solicit its
master-key components from the two escrow agents, combine
them, recover the session key, and eventually decrypt the
encrypted traffic.

   6. The encryption algorithm on the chip would be secret.

   7. The chip would be protected against reverse engineering
and other attempts to access its technical details.
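
   For concreteness, the construction described in items 4 and
5 can be sketched as follows. Everything in the sketch is an
assumption made for illustration: encrypt, decrypt, and
combine stand in for the chip's classified algorithm and for
the method of recombining the escrowed components, and the
field widths are arbitrary.

   # Illustrative sketch only; not the actual (classified)
   # Clipper design. encrypt, decrypt, and combine are
   # hypothetical stand-ins.

   def build_leaf(session_key, unit_key, family_key, serial_number):
       # Item 4(a): session key protected by the device-unique
       # unit ("master") key and the secret family key.
       wrapped_key = encrypt(family_key, encrypt(unit_key, session_key))
       # Item 4(b): chip serial number protected by the family key.
       wrapped_serial = encrypt(family_key, serial_number)
       return wrapped_key + wrapped_serial      # the LEAF

   def identify_device(leaf, family_key):
       # Item 5, first step: recover the serial number to learn
       # which device's key components to request from the two
       # escrow agents.
       return decrypt(family_key, leaf[16:])

   def recover_session_key(leaf, family_key, component_1, component_2):
       # Item 5, second step: combine the escrowed components
       # into the unit key and unwrap the session key.
       unit_key = combine(component_1, component_2)
       return decrypt(unit_key, decrypt(family_key, leaf[:16]))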

----------

SOURCE: Dorothy Denning and Miles Smid, "Key Escrowing Today,"
*IEEE Communications*, Volume 32(9), September 1994, pp.
58-68. Available on-line from
http://www.cosc.georgetown.edu/~denning/crypto/Key-Escrowing-Today.txt.

____________________________________________________________

                BOX 5.2 The Relationship of
                Escrowed Encryption to SIGINT

   Escrowed encryption -- especially the EES and the Clipper
initiative -- is a tool of law enforcement more than of
signals intelligence. The EES was intended primarily for
domestic use, although exports of EES-compliant devices have
not been particularly discouraged. Given that the exceptional
access feature of escrowed encryption has been openly
announced, purchase by foreign governments for secure
communications is highly unlikely.

   On the other hand, the U.S. government has classified the
Skipjack algorithm to keep foreign adversaries from learning
more about good cryptography. In addition, wide deployment and
use of escrowed encryption would complicate the task of
signals intelligence, simply because individual keys would
have to be obtained one by one for communications that might
or might not be useful. (Still, EES devices would be better
for SIGINT than unescrowed secure telephones, in the sense
that widely deployed secure telephones without features for
exceptional access would be much harder to penetrate.)

   Finally, the impact of escrowed encryption on intelligence
collection abroad depends on the specific terms of escrow
agent certification. Even assuming that all relevant escrow
agents are located within the United States (a question
addressed at greater length in Appendix G), the specific
regulations governing their behavior are relevant.
Intelligence collections of digital data can proceed with few
difficulties if regulations permit escrow agents to make keys
available to national security authorities on an automated
basis and without the need to request keys one by one. On the
other hand, if the regulations forbid wholesale access to keys
(and the products in question do not include a "universal key"
that allows one key to decrypt messages produced by many
devices), escrowed encryption would provide access primarily
to specific encrypted communications that are known to be
intrinsically interesting (e.g., known to be from a particular
party of interest). However, escrowed encryption without
wholesale access to keys would not provide significant
assistance to intelligence collections undertaken on a large
scale.

____________________________________________________________

           BOX 5.3 Administration's Draft Software
         Key Escrow Export Criteria -- November 1995

                     Key Escrow Feature

   1. The key(s) required to decrypt the product's key escrow
cryptographic functions' ciphertext shall be accessible
through a key escrow feature.

   2. The product's key escrow cryptographic functions shall
be inoperable until the key(s) is escrowed in accordance with
#3.

   3. The product's key escrow cryptographic functions' key(s)
shall be escrowed with escrow agent(s) certified by the U.S.
Government, or certified by foreign governments with which the
U.S. Government has formal agreements consistent with U.S. law
enforcement and national security requirements.

   4. The product's key escrow cryptographic functions'
ciphertext shall contain, in an accessible format and with a
reasonable frequency, the identity of the key escrow agent(s)
and information sufficient for the escrow agent(s) to identify
the key(s) required to decrypt the ciphertext.

   5. The product's key escrow feature shall allow access to
the key(s) needed to decrypt the product's ciphertext
regardless of whether the product generated or received the
ciphertext.

   6. The product's key escrow feature shall allow for the
recovery of multiple decryption keys during the period of
authorized access without requiring repeated presentations of
the access authorization to the key escrow agent(s).


                     Key Length Feature

   7. The product's key escrow cryptographic functions shall
use an unclassified encryption algorithm with a key length not
to exceed sixty-four (64) bits.

   8. The product's key escrow cryptographic functions shall
not provide the feature of multiple encryption (e.g.,
triple-DES).


                  Interoperability Feature

   9. The product's key escrow cryptographic functions shall
interoperate only with key escrow cryptographic functions in
products that meet these criteria, and shall not interoperate
with the cryptographic functions of a product whose key escrow
encryption function has been altered, bypassed, disabled, or
otherwise rendered inoperative.


      Design, Implementation, and Operational Assurance

   10. The product shall be resistant to anything that could
disable or circumvent the attributes described in #1 through
#9.
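
   As a purely illustrative sketch of criterion 4 (the marker
format and interval below are assumptions, not part of the
draft criteria), a product might interleave an escrow
identification marker with its ciphertext so that the escrow
agent can tell which escrowed key is needed:

   # Illustrative sketch only; the marker format and interval
   # are assumptions, not part of the draft export criteria.

   def add_escrow_markers(ciphertext, agent_id, key_id, interval=4096):
       # Emit an identification marker before every `interval`
       # bytes of ciphertext (criterion 4: escrow agent identity
       # and key-identifying information, at a "reasonable
       # frequency").
       marker = ("ESCROW:" + agent_id + ":" + key_id + "\n").encode()
       out = bytearray()
       for i in range(0, len(ciphertext), interval):
           out += marker
           out += ciphertext[i:i + interval]
       return bytes(out)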

----------

SOURCE: National Institute of Standards and Technology, *Draft
Software Key Escrow Encryption Export Criteria*, November 6,
1995. Available on line from
http://csrc.ncsl.nist.gov/keyescrow/criteria.txt (11/95
version; NIST Web page).

____________________________________________________________

    BOX 5.4 Non-Clipper Proposals for Escrowed Encryption

   AT&T CryptoBackup. CryptoBackup is an AT&T proprietary
design for a commercial or private key-escrow encryption
system. The data encryption key for a document is recovered
through a backup recovery vector (BRV), which is stored in the
document header. The BRV contains the document key encrypted
under a master public key of the escrowed agent(s). (David P.
Maher, "Crypto Backup and Key Escrow," *Communications of the
ACM*, March 1996.)

   Bankers Trust Secure Key Escrow Encryption System
(SecureKEES). Employees of a corporation register their
encryption devices (e.g., smart card) and private encryption
keys with one or more commercial escrow agents selected by the
corporation. (SecureKEES product literature, CertCo, Bankers
Trust Company.)

   Bell Atlantic Yaksha System. An on-line key security server
generates and distributes session keys and file keys using a
variant of the RSA algorithm. The server transmits the keys to
authorized parties for data recovery purposes. (Ravi Ganesan,
"The Yaksha Security System," *Communications of the ACM*,
March 1996.)

   Royal Holloway Trusted Third Party Services. This proposed
architecture for a public key infrastructure requires that the
trusted third parties associated with pairs of communicating
users share parameters and a secret key. (Nigel Jefferies,
Chris Mitchell, and Michael Walker, *A Proposed Architecture
for Trusted Third Party Services*, Royal Holloway, University
of London, 1995.)

   RSA Secure(TM). This file encryption product provides data
recovery through an escrowed master public key, which can be
split among up to 255 trustees using a threshold scheme. (RSA
Secure(TM), product literature from RSA Data Security Inc.)

   Nortel Entrust. This commercial product archives users'
private encryption keys as part of the certificate authority
function and public-key infrastructure support. (Warwick Ford,
"Entrust Technical Overview," White Paper, Nortel Secure
Networks, October 1994.)

   National Semiconductor CAKE. This proposal combines a TIS
Commercial Key Escrow (CKE) with National Semiconductor's
PersonaCard(TM). (W.B. Sweet, "Commercial Automated Key Escrow
(CAKE): An Exportable Strong Encryption Proposal," National
Semiconductor, iPower Business Unit, June 4, 1995.)

   TIS Commercial Key Escrow (CKE). This is a commercial key
escrow system for stored data and file transfers. Data
recovery is enabled through master keys held by a Data
Recovery Center. (Stephen T. Walker, Stephen B. Lipner, Carl
M. Ellison, and David M. Balenson, "Commercial Key Recovery,"
*Communications of the ACM*, March 1996.)

   TECSEC VEIL(TM). This commercial product provides file (and
object) encryption. Private key escrow is built into the key
management infrastructure. (Edward M. Scheidt and Jon L.
Roberts, "Private Escrow Key Management," TECSEC Inc., Vienna,
Va. See also TECSEC VEIL(TM), product literature.)

   Viacrypt PGP/BE (Business Edition). Viacrypt is a
commercialized version of PGP, the free Internet-downloadable
software package for encrypted communications. The Business
Edition of Viacrypt optionally enables an employer to decrypt
all encrypted files or messages sent or received by an
employee by carrying the session key encrypted under a
"Corporate Access Key" in the header for the file or message.
(See http://www.viacrypt.com.)
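
   Two mechanisms recur in the products described above and can
be made concrete with a short sketch: a master key split among
multiple trustees under a threshold scheme (as in RSA Secure),
and a recovery field carried in the file or message header
that holds the session key encrypted under a master or
corporate access key (as in CryptoBackup and Viacrypt PGP/BE).
The sketch is illustrative only; none of the function names,
parameters, or formats describes any vendor's actual product,
and public_key_wrap is a hypothetical stand-in for encryption
under a recovery public key.

   # Illustrative sketch only; not any vendor's actual design.
   import secrets

   P = 2**127 - 1   # prime modulus for the threshold arithmetic

   def split_secret(secret, k, n):
       # Shamir-style threshold split: any k of the n shares
       # reconstruct the secret; fewer than k reveal nothing.
       coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
       def f(x):
           y = 0
           for c in reversed(coeffs):
               y = (y * x + c) % P
           return y
       return [(x, f(x)) for x in range(1, n + 1)]

   def recover_secret(shares):
       # Lagrange interpolation at x = 0 over the prime field.
       secret = 0
       for xi, yi in shares:
           num, den = 1, 1
           for xj, _ in shares:
               if xj != xi:
                   num = (num * -xj) % P
                   den = (den * (xi - xj)) % P
           secret = (secret + yi * num * pow(den, P - 2, P)) % P
       return secret

   def build_recovery_header(session_key, recovery_public_key):
       # Header-carried recovery field: the file's session key
       # wrapped under an escrow or "corporate access" public
       # key, so the designated recovery party can decrypt
       # without the user's cooperation. public_key_wrap is a
       # hypothetical stand-in (e.g., for RSA encryption).
       return b"RECOVERY" + public_key_wrap(recovery_public_key,
                                            session_key)

   For example, a file key represented as an integer could be
split with split_secret(key, 3, 5) so that any three of the
five trustees, acting together, could reconstruct it with
recover_secret.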

----------

SOURCE: Most of these examples are taken from Dorothy Denning
and Miles Smid, "Key Escrowing Today," *IEEE Communications*,
Volume 32(9), September 1994, pp. 58-68. Available on-line from
http://www.cosc.georgetown.edu/~denning/crypto/Key-Escrowing-Today.txt.

____________________________________________________________

          BOX 5.5 Law Enforcement Requirements for
                Escrowed Encryption Products


                 Information Identification

   +    The product is unable to encrypt/decrypt data unless
the necessary information to allow law enforcement to decrypt
communications and stored information is available for release
to law enforcement.

   +    A field is provided that readily identifies the
information needed to decrypt each message, session, or file
generated or received by the user of the product.

   +    Repeated involvement by key escrow agents [KEAs] is
not required to obtain the information needed to decrypt
multiple conversations and data messages (refer to expeditious
information release by KEAs) during a period of authorized
communications interception.


           Provision of Subject's Information Only

   +    Only information pertaining to the communications or
stored information generated by or for the subject is needed
for law enforcement decryption.


            Subversions of Decryption Capability

   +    The product is resistant against alterations that
disable or bypass law enforcement decryption capabilities.

   +    Any alteration to the product to disable or bypass law
enforcement's decryption capability requires a significant
level of effort regardless of whether similar alterations have
been made to any other identical version of that product.


                        Transparency

   +    The decryption of an intercepted communication is
transparent to the intercept subject and all other parties to
the communication except the investigative agency and the key
escrow agent.


               Access to Technical Details to
                 Develop Decrypt Capability

   +    Law enforcement may need access to a product's
technical details to develop a key escrow decrypt capability
for that product.

----------

SOURCE: Federal Bureau of Investigation, viewgraphs of
presentation to *International Cryptography Institute 1995*,
September 22, 1995.

____________________________________________________________

   BOX 5.6 Law Enforcement Requirements for Escrow Agents


                  Information Availability

   +    The information necessary to allow law enforcement to
decrypt communications and stored information is available.
KEAs [key escrow agents] should maintain or be capable of
generating all the necessary decrypt (key) information.

   +    Key and/or related information needed to decrypt
communications and stored information is retained for extended
time periods. KEAs should be able to decrypt information
encrypted with a device or product's current and/or former
key(s) for a time period that may vary depending on the
application (e.g., voice vs. stored files).

   +    A backup capability exists for key and other
information needed to decrypt communications and stored
information. Thus, a physically separate backup capability
should be available to provide redundancy of resources should
the primary capability fail.


            Key Escrow Agent (KEA) Accessibility

   +    KEAs should be readily accessible. For domestic
products, they should reside and operate in the United States.
They should be able to process proper requests at any time;
most requests will be submitted during normal business hours,
but exigent circumstances (e.g., kidnappings, terrorist
threats) may require submission of requests during nonbusiness
hours.


                 Information Release by KEAs

   +    The information needed for decryption is expeditiously
released upon receipt of a proper request. Since
communications intercepts require the ability to decrypt
multiple conversations and data messages sent to or from the
subject (i.e., access to each session or message key) during
the entire intercept period, only one initial affirmative
action should be needed to obtain the relevant information.
Exigent circumstances (e.g., kidnappings, terrorist threats)
will require the release of decrypt information within a
matter of hours.


       Confidentiality and Safeguarding of Information

   +    KEAs should safeguard and maintain the confidentiality
of information pertaining to the request for and the release
of decrypt information. KEAs should protect the
confidentiality of the person or persons for whom a key escrow
agent holds keys or components thereof, and protect the
confidentiality of the identity of the agency requesting
decrypt information or components thereof and all information
concerning such agency's access to and use of encryption keys
or components thereof.

   For law enforcement requests, KEA personnel knowledgeable
of an interception or decryption should be of good character
and should not have been convicted of crimes of moral
turpitude or other crimes bearing on their trustworthiness.
For national security requests, KEA personnel viewing and/or
storing classified requests must meet the applicable U.S.
Government requirements for accessing and/or storing classified
information. Efforts are ongoing to examine unclassified
alternatives.

   +    KEAs should be legitimate organizations without ties
to criminal enterprises, and licensed to conduct business in
the United States. KEAs for domestic products should not be a
foreign corporation, a foreign country, or an entity thereof.

----------

SOURCE: Federal Bureau of Investigation, viewgraphs of
presentation to *International Cryptography Institute 1995*,
September 22, 1995.

____________________________________________________________

      BOX 5.7 Proposed U.S. Government Requirements for
        Ensuring Escrow Agent Integrity and Security

   1. Escrow agent entities shall devise and institutionalize
policies, procedures, and mechanisms to ensure the
confidentiality, integrity, and availability of key escrow
related information.

   a. Escrow agent entities shall be designed and operated so
   that a failure by a single person, procedure, or mechanism
   does not compromise the confidentiality, integrity, or
   availability of the key and/or key components (e.g.,
   two-person control of keys, split keys, etc.).

   b. Unencrypted escrowed key and/or key components that are
   stored and/or transmitted electronically shall be protected
   (e.g., via encryption) using approved means.

   c. Unencrypted escrowed key and/or key components stored
   and/or transferred via other media/methods shall be
   protected using approved means (e.g., safes).

   2. Escrow agent entities shall ensure due form of escrowed
key access requests and authenticate the requests for escrowed
key and/or key components.

   3. Escrow agent entities shall protect against disclosure
of information regarding the identity of the person/
organization whose key and/or key components are requested, and
the fact that a key and/or key component was requested or
provided.

   4. Escrow agent entities shall enter keys/key components
into the escrowed key database immediately upon receipt.

   5. Escrow agent entities shall maintain at least two copies
of any key and/or key component in independent locations to
help ensure the availability of such keys and/or key
components in the event of unforeseen circumstances.

   6. Escrow agent entities that are certified by the U.S.
government shall work with developers of key escrow encryption
products and support a feature that allows products to verify
to one another that the products' keys have been escrowed with
a U.S.-certified agent.

-----------

SOURCE: National Institute of Standards and Technology, *Draft
Software Key Escrow Encryption Export Criteria*, November 6,
1995. Available on line from
http://csrc.ncsl.nist.gov/keyescrow/criteria.txt (11/95
version; NIST Web page).

____________________________________________________________

        BOX 5.8 Requirements for Ensuring Key Access

   7. An escrow agent entity shall employ one or more persons
who possess a SECRET clearance for purposes of processing
classified (e.g., FISA) requests to obtain keys and/or key
components.

   8. Escrow agent entities shall protect against unauthorized
disclosure of information regarding the identity of the
organization requesting the key or key components.

   9. Escrow agent entities shall maintain data regarding all
key escrow requests received, key escrow components released,
database changes, system administration accesses, and dates of
such events, for purposes of audit by appropriate government
officials or others.

   10. Escrow agent entities shall maintain escrowed keys
and/or key components for as long as such keys may be required
to decrypt information relevant to a law enforcement
investigation.

   11. Escrow agent entities shall provide key/key components
to authenticated requests in a timely fashion and shall
maintain a capability to respond more rapidly to emergency
requirements for access.

   12. Escrow agent entities shall possess and maintain a
Certificate of Good Standing from the State of incorporation
(or similar local/national authority).

   13. Escrow agent entities shall provide to the U.S.
government a Dun & Bradstreet/TRW number or similar credit
report pointer and authorization.

   14. Escrow agent entities shall possess and maintain an
Errors & Omissions insurance policy.

   15. Escrow agent entities shall provide to the U.S.
government a written copy of, or a certification of the
existence of a corporate security policy governing the key
escrow agent entity's operation.

   16. Escrow agent entities shall provide to the U.S.
government a certification that the escrow agent will comply
with all applicable federal, state, and local laws concerning
the provisions of escrow agent entity services.

   17. Escrow agent entities shall provide to the U.S.
government a certification that the escrow agent entity will
transfer to another approved escrow agent the escrow agent
entity's equipment and data in the event of any dissolution or
other cessation of escrow agent entity operations.

   18. Escrow agent entities for products sold in the United
States shall not be a foreign country or entity thereof, a
national of a foreign country, or a corporation of which an
alien is an officer, of which more than one-fourth of the
stock is owned by aliens, or which is directly or indirectly
controlled by such a corporation. Foreign escrow agent
entities for products exported from the United States will be
approved on a case by case basis as law enforcement and
national security agreements can be negotiated.

   19. Escrow agent entities shall provide to the U.S.
government a certification that the escrow agent entity will
notify the U.S. government in writing of any changes in the
foregoing information.

   20. Fulfillment of these and the other criteria is subject
to periodic recertification.

----------

SOURCE: National Institute of Standards and Technology, *Draft
Key Escrow Agent Criteria*, December 1, 1995. Available on
line from
http://csrc.ncsl.nist.gov/keyescrow/agent-criteria.txt.

____________________________________________________________

         BOX 5.9 Statutory Limitations on Liability

   Government can promote the use of specific services and
products by assuming some of the civil liability risks
associated with them. Three examples follow:

   +    The Atomic Energy Damages Act, also called the
Price-Anderson Act, limits the liability of nuclear power
plant operators for harm caused by a nuclear incident (such as
an explosion or radioactive release). To operate a nuclear
power plant, a licensee must show the U.S. Nuclear Regulatory
Commission (U.S. NRC) that it maintains financial protection
(such as private insurance, self-insurance, or other proof of
financial responsibility) equal to the maximum amount of
insurance available at reasonable cost and reasonable terms
from private sources, unless the U.S. NRC sets a lower
requirement on a case-specific basis. The U.S. NRC indemnifies
licensees from all legal liability arising from a nuclear
incident, including a precautionary evacuation, which is in
excess of the required financial protection, up to a maximum
combined licensee-and-government liability of $560 million.
Incidents that cause more than $560 million in damage will
trigger review by the Congress to determine the best means to
compensate the public, including appropriating funds.

   +    The Commercial Space Launch Act provides similar
protection to parties licensed to launch space vehicles or
operate launch sites, but with a limit on the total liability
the United States accepts. The licensee must obtain financial
protection sufficient to compensate the maximum probable loss
that third parties could claim for harm or damage, as
determined by the secretary of transportation. The most that
can be required is $500 million or the maximum liability
insurance available from private sources, whichever is lower.
The United States is obligated to pay successful claims by
third parties in excess of the required protection, up to $1.5
billion, unless the loss is related to the licensee's willful
misconduct. The law also requires licensees to enter into
reciprocal waivers of claims with their contractors and
customers, under which each party agrees to be responsible for
losses it sustains.

   +    The swine flu vaccination program of 1976 provides an
example in which the United States accepted open-ended
liability and paid much more than expected. Doctors predicted
a swine flu epidemic, and Congress appropriated money for the
Department of Health, Education, and Welfare (HEW) to pay four
pharmaceutical manufacturers for vaccines to be distributed
nationwide. The manufacturers' inability to obtain liability
insurance delayed the program until Congress passed
legislation (P.L. 94-380) in which the United States assumed
all liability other than manufacturer negligence. The
government's liability could thus include, for example,
harmful side effects. Claims against the United States would
be processed under the Federal Tort Claims Act (which provides
for trial by judge rather than jury and no punitive damages,
among other distinctions). Some of the 45 million people who
were immunized developed complications, such as Guillain-Barre
syndrome; consequently, the program was canceled. By September
1977, 815 claims had been filed. The United States ultimately
paid more than $100 million to settle claims, and some
litigation is still pending today. Manufacturers, who by law
were liable only for negligence, were not sued.

____________________________________________________________

    BOX 5.10 Perspectives on Secrecy and System Security

   The distinction between the general system (i.e., a
product) and the specific key (of an encrypted message) was
first articulated by Auguste Kerckhoffs in his historic book
*La Cryptographie Militaire*, published in 1883. Quoting David
Kahn in *The Codebreakers*:

   Kerckhoffs deduced [that] ... compromise of the system
   should not inconvenience the correspondents.... Perhaps the
   most startling requirement, at first glance, was the second
   .... Kerckhoffs explained that by "system" he meant "the
   material part of the system; tableaux, code books, or
   whatever mechanical apparatus may be necessary," and not
   "the key proper." Kerckhoffs here makes for the first time
   the distinction, now basic to cryptology, between the
   general system and the specific key. Why must the general
   system "not require secrecy"? ... Because Kerckhoffs said,
   "it is not necessary to conjure up imaginary phantoms and
   to suspect the incorruptibility of employees or subalterns
   to understand that, if a system requiring secrecy were in
   the hands of too large a number of individuals, it could be
   compromised at each engagement...." This has proved to be
   true, and Kerckhoffs' second requirement has become widely
   accepted under a form that is sometimes called the
   fundamental assumption of military cryptography: that the
   enemy knows the general system. But he must still be unable
   to solve messages in it without knowing the specific key.
   In its modern formulation, the Kerckhoffs doctrine states
   that secrecy must reside solely in the keys."(1)

   A more modern expression of this sentiment is provided by
Dorothy Denning:

   The security of a cryptosystem should depend only on the
   secrecy of the keys and not on the secrecy of the
   algorithms.... This requirement implies the algorithms must
   be inherently strong; that is, it should not be possible to
   break a cipher simply by knowing the method of
   encipherment. This requirement is needed because the
   algorithms may be in the public domain, or known to a
   cryptanalyst.(2)

----------

   (1)  David Kahn, *The Codebreakers*, Macmillan, New York,
1967, p. 235.

   (2)  Dorothy Denning, *Cryptography and Data Security*,
Addison-Wesley, Reading, Mass., 1982, p. 8.

____________________________________________________________

[End chapter 5]










                              6

      Other Dimensions of National Cryptography Policy


   In addition to export controls and escrowed encryption,
current national policy on cryptography is affected by
government use of a large number of levers available to it,
including the Communications Assistance for Law Enforcement
Act, the standards-setting process, R&D funding, procurement
practices, education and public jawboning, licenses and
certification, and arrangements both formal and informal with
various other governments (state, local, and foreign) and
organizations (e.g., specific private companies). All of these
are controversial because they embody judgments about how the
interests of law enforcement and national security should be
balanced against the needs of the private sector. In
addition, the international dimensions of cryptography are
both critical (because cryptography affects communications and
communications are fundamentally international) and enormously
difficult (because national interests differ from government
to government).


              6.1 THE COMMUNICATIONS ASSISTANCE
                   FOR LAW ENFORCEMENT ACT


   The Communications Assistance for Law Enforcement Act
(CALEA) was widely known as the "digital telephony" bill
before its formal passage. The CALEA is not explicitly
connected to national cryptography policy, but it is an
important aspect of the political context in which national
cryptography policy has been discussed and debated.


                 6.1.1 Brief Description of
             and Stated Rationale for the CALEA


General Description

   The Communications Assistance for Law Enforcement Act
(CALEA) was passed in October 1994. The act imposes on
telecommunications carriers four requirements in connection
with those services or facilities that allow customers to
originate, terminate, or direct communications:

   +    To expeditiously isolate and enable the government to
intercept, pursuant to court order or other lawful
authorization, all wire and electronic communications in the
carrier's control to or from the equipment, facilities, or
services of a subscriber, in real time or at any later time
acceptable to the government. Carriers are not responsible for
decrypting encrypted communications that are the subject of
court-ordered wiretaps, unless the carrier provided the
encryption and can decrypt it. Moreover, carriers are not
prohibited from deploying an encryption service for which they
do not retain the ability to decrypt communications for law
enforcement access.

   +    To expeditiously isolate and enable the government to
access, pursuant to court order or other lawful authorization,
reasonably available call-identifying information about the
origin and destination of communications. Access must be
provided in such a manner that the information may be
associated with the communication to which it pertains and is
provided to the government before, during, or immediately
after the communication's transmission to or from the
subscriber.

   +    To make intercepted communications and
call-identifying information available to government, pursuant
to court order or other lawful authorization, so that they may
be transmitted over lines or facilities leased or procured by
law enforcement to a location away from the carrier's
premises.

   +    To meet these requirements with a minimum of
interference with the subscriber's service and in such a way
that protects the privacy of communications and
call-identifying information that are not targeted by
electronic surveillance orders, and that maintains the
confidentiality of the government's interceptions.

   The CALEA also authorizes federal money for retrofitting
common carrier systems to comply with these requirements. As
this report is being written, no money has yet been
appropriated for this task.

   The CALEA requirements apply only to those services or
facilities that enable a subscriber to make, receive, or
direct calls. They do not apply to information services, such
as electronic mail providers, on-line services such as
Compuserve or America Online, or Internet access providers,
nor to private networks or services whose sole purpose is to
interconnect carriers. Furthermore, the CALEA
requires law enforcement authorities to use carrier employees
or personnel to activate a surveillance. The CALEA also
provides that a warrant is needed to tap a cordless telephone;
wiretaps on cellular telephones are already governed by Title
III or the Foreign Intelligence Surveillance Act.


The Stated Rationale for the CALEA

   Historically, telecommunications service providers have
cooperated with law enforcement officials in allowing access
to communications upon legal authorization. New
telecommunications services (e.g., call forwarding, paging,
cellular calls) and others expected in the future have
diminished the ability of law enforcement agencies to carry
out legally authorized electronic surveillance. The primary
impact of the CALEA is to ensure that within 4 years,
telecommunications service providers will still be able to
provide the assistance necessary to law enforcement officials
to conduct surveillance of wire and electronic communications
(both content and call-identifying information) controlled by
the carrier, regardless of the nature of the particular
services being offered.


      6.1.2 Reducing Resource Requirements for Wiretaps

   Once a surveillance order has been approved judicially, it
must be implemented. In practice, the implementation of a
surveillance order requires the presence of at least two
agents around the clock. Such a presence is required if
real-time minimization requirements are to be met.(1) As a
result, personnel requirements are the most expensive aspect
of electronic surveillance. The average cost of a wiretap
order is $57,000 (Appendix D), or approximately one-half of a
full-time-equivalent agent-year. Such costs are not incurred
lightly by law enforcement agencies.

   Under these circumstances, procedures and/or technologies
that could reduce the labor required to conduct wiretaps pose
a potential problem for individuals concerned about excessive
use of wiretaps. Specifically, these individuals are concerned
that the ability to route wiretapped calls to a central
location would enable a single team of agents to monitor
multiple conversations.(2) Such time sharing among monitoring
teams could lower wiretap costs significantly. From the
standpoint of law enforcement, these savings could be
used for other law enforcement purposes, and they would have
the additional effect of eliminating an operational constraint
on the frequency with which wiretap authority is sought today.

   Technologies that would enable minimization without human
assistance are in their infancy today. For example, speech
recognition technology for the most part cannot cope with
continuous speech from arbitrary speakers, and
artificial intelligence programs today and for the foreseeable
future will be unable to distinguish between the criminally
relevant and nonrelevant parts of a conversation. Human agents
are an essential component of a wiretap, and law enforcement
officials have made three key points in response to the
concern raised above.

   +    Most importantly, today's wiretaps are generally
performed with law enforcement agencies paying
telecommunications service providers to deliver the
intercepted communications to a point of law enforcement's
choosing.

   +    From an operational standpoint, the real-time
minimization of wiretapped conversations requires agents who
are personally familiar with the details of the case under
investigation, so that they know when the subjects are engaged
in conversations related to the case -- agents exceed their
authority if they monitor unrelated conversations.

   +    Procedural rules require that all evidence be
maintained through a proper chain of custody and in a manner
such that the authenticity of evidence can be established. Law
enforcement officials believe that the use of one team to
monitor different conversations could call into question the
ability to establish a clear chain of custody.

----------

   (1)  Minimization refers to the practice, required by Title
III, of monitoring only those portions of a conversation that
are relevant to the crime under investigation. If a subject
discusses matters that are strictly personal, such discussions
are not subject to monitoring. In practice, a team of agents
operates a tape recorder on the wiretapped line. Minimization
requires agents to turn off the tape recorder and to cease
monitoring the conversation for a short period of time if they
overhear nonrelevant discussions. At the end of that time
period, they are permitted to resume monitoring. For obvious
reasons, this practice is conducted in real time. When agents
encounter a foreign language with which they are unfamiliar,
they are allowed to record the entire conversation; the tape
is then "minimized" after the fact of wiretapping. Additional
discussion of the requirements imposed on wiretapping by Title
III are contained in Appendix D.

   (2)  For example, such a concern was raised at the Fifth
Conference on Computers, Freedom, and Privacy held in San
Francisco in March 1995. The argument goes as follows. While
the CALEA authorizes $500 million to pay for existing in-place
telephone switch conversions to implement the capabilities
desired by law enforcement, this amount is intended as a
one-time cost; upgrades of switching systems are expected to
implement these capabilities without government subsidy.
(Moreover, the Congress has not yet appropriated this money.)
The point is that, once the conversion has been made,
additional wiretap orders would impose no additional
infrastructure cost (though the per-order cost of $57,000
would still obtain), and so incremental cost would be less of
a barrier to seeking more wiretap orders. In short, critics
argue that it would make good economic sense to make
additional use of resources if such use can "piggy-back" on an
already-made investment.

____________________________________________________________


                 6.1.3 Obtaining Access to
                Digital Streams in the Future

   In the conduct of any wiretap, the first technical problem
is simply gaining access to the relevant traffic itself,
whether encrypted or not. For law enforcement, products with
encryption capabilities and features that allow exceptional
access are useless without access to the traffic in question.
The CALEA was an initiative spearheaded by law enforcement to
deal with the access problem created by new telecommunications
services.

   The problems addressed by the CALEA will inevitably
resurface as newer communications services are developed and
deployed for use by common carriers and private entities
(e.g., corporations) alike. It is axiomatic that the
complexity of interactions among communications systems will
continually increase, both as a result of increased
functionality and the need to make more efficient use of
available bandwidth. Consequently, isolation of the digital
streams associated with the party or parties targeted by law
enforcement will become increasingly difficult if the
cooperation of the service provider is not forthcoming, for
all of the reasons described in Chapter 2. (It is for this
reason that the CALEA applies to parties that are not common
carriers today upon appropriate designation by the Federal
Communications Commission.)

   Moreover, even when access to the digital stream of an
application is assured, the structure of the digital stream
may be so complex that it would be extremely costly to
determine all of the information present without the
assistance of the application developer. Tools designed to
isolate the relevant portions of a given digital stream
transmitted on open systems will generally be less expensive
than tools for proprietary systems, but since both open and
proprietary systems will be present in any future
telecommunications environment, law enforcement authorities
will need tools for both. The development of such tools will
require considerable technical skill, skill that is most
likely possessed by the application developers; cooperation
with product developers may decrease the cost of developing
these tools.

   Finally, as the telecommunications system becomes more and
more heterogeneous, even the term "common carrier" will become
harder to define or apply. The routing of an individual data
communication through the "network" will be dynamic and may
take any one of a number of paths, decisions about which are
not under the user's control. While only one link in a given
route need be a common carrier for CALEA purposes, identifying
that common carrier in practice may be quite difficult.


          6.1.4 The CALEA Exemption of Information
             Service Providers and Distinctions
               Between Voice and Data Services

   At present, users of data communications services access
networks such as the Internet either through private networks
(e.g., via their employers) or through Internet service
providers that provide connections for a variety of
individuals and organizations. Both typically make use of
lines owned and operated by telecommunications service
providers. In the former case, law enforcement access to the
digital stream is more or less the same problem as it is for
the employer (and law enforcement has access through the legal
process to the employer). In the latter case, the CALEA
requires the telephone service provider to provide to law
enforcement authorities a copy of the digital stream being
transported.

   The CALEA exempts on-line information service providers
such as America Online and Compuserve from its requirements.
In the future, other CALEA issues may arise as the
capabilities provided by advanced information technologies
grow more sophisticated. For example, the technological
capability exists to use Internet-based services to supply
real-time voice communications.(3) Even today, a number of
Internet and network service providers are capable of
supporting (or are planning to support) real-time
"push-to-talk" voice communications. The CALEA provides that
a party providing communications services that in the judgment
of the FCC are "a replacement for a substantial portion of the
local telephone exchange service" may be deemed a carrier
subject to the requirements of the CALEA. Thus, one possible
path along which telecommunications services may evolve could
lead to the imposition of CALEA requirements on information
service providers, even though they were exempted as an
essential element of a legislative compromise that enabled the
CALEA to pass in the first place.

   These possibilities are indicative of a more general
problem: the fact that lines between "voice" and "data"
services are being increasingly blurred. This issue is
addressed in greater detail in Chapter 7.

----------

   (3)  Fred Hapgood, "IPHONE," *Wired*, October 1995, p. 140;
and Lawrence M. Fisher, "Long-Distance Phone Calls in the
Internet," *New York Times*, March 14, 1995, p. D-6.

____________________________________________________________


                  6.2 OTHER LEVERS USED IN
                NATIONAL CRYPTOGRAPHY POLICY


   The government has a number of tools to influence the
possession and use of cryptography domestically and abroad.
How the government uses these tools in the context of national
cryptography policy reflects the government's view of how to
balance the interests of the various stakeholders affected by
cryptography.


       6.2.1 Federal Information Processing Standards

   Federal Information Processing Standards (FIPSs) are an
important element of national cryptography policy, and all
federal agencies are encouraged to cite FIPSs in their
procurement specifications. (Box 6.1 contains a brief
description of all FIPSs related to cryptography.) The
National Institute of Standards and Technology (NIST) is
responsible for issuing FIPSs.

   FIPSs can have enormous significance to the private sector
as well, despite the fact that the existence of a FIPS does
not legally compel a private party to adopt it. One reason is
that to the extent that a FIPS is based on existing
private-sector standards (which it often is), it codifies
standards of existing practice and contributes to a planning
environment of greater certainty. A second reason is that a
FIPS is often taken as a government endorsement of the
procedures, practices, and algorithms contained therein, and
thus a FIPS may set a de facto "best practices" standard for
the private sector. A third reason is related to procurements
that are FIPS-compliant as discussed in the next section.

   NIST has traditionally relied on private sector standards-
setting processes when developing FIPSs. Such practice
reflects NIST's recognition of the fact that the standards it
sets will be more likely to succeed -- in terms of reducing
procurement costs, raising quality, and influencing the
direction of information technology market development -- if
they are supported by private producers and users.(4)

   The existence of widely accepted standards is often an
enormous boon to interoperability of computers and
communication devices, and the converse is generally true as
well: the absence of widely accepted standards often impedes
the growth of a market.

   In the domain of cryptography, FIPSs have had a mixed
result. The promulgation of FIPS 46-1, the Data Encryption
Standard (DES) algorithm for encrypting data, was a boon to
cryptography and vendors of cryptographic products. On the
other hand, the two cryptography-related FIPSs most recently
produced by NIST (FIPS-185, the Escrowed Encryption Standard
(EES), and FIPS-186, the Digital Signature Standard (DSS))
have met with a less favorable response. Neither was
consistent with existing de facto industry standards or
practice, and both met with significant negative response from
private industry and users.(5)

   The promulgation of the EES and the DSS, as well as current
Administration plans to promulgate a modification of the EES
to accommodate escrowed encryption for data storage and
communications, and another FIPS for key escrow that would set
performance requirements for escrow agents and for escrowed
encryption
products, has generated a mixed market reaction. Some
companies see the promulgation of these standards as a market
opportunity, while others see these standards as creating yet
more confusion and uncertainty in pushing escrowed encryption
on a resistant market.

   Appendix N contains a general discussion of FIPSs and the
standards-setting process.

---------

   (4)  Cargill, *Information Technology Standardization*, p.
213.

   (5)  The story of resistance to the EES is provided in
Susan Landau et al., *Codes, Keys, and Conflicts*, Association
for Computing Machinery, Washington, D.C., June 1994, p. 48;
to DSS, in Landau et al., 1994, pp. 41-43. In the case of DSS,
a de facto industry standard had already emerged based on
RSA's public-key algorithm.

____________________________________________________________


          6.2.2 The Government Procurement Process

   Government procurement occurs in two domains. One domain is
special-purpose equipment and products, for which government
is the only consumer. Such products are generally classified
in certain ways; weapons and military-grade cryptography are
two examples. The other domain is procurement of products that
are useful in both the private and public sectors.

   Where equipment and products serve both government and
private sector needs, in some instances the ability of the
government to buy in bulk guarantees vendors a large enough
market to take advantage of mass production, thereby driving
down the unit costs of such a product for all consumers.
Through its market power,
government has some ability to affect the price of products
that are offered for sale on the open market. Furthermore,
acceptance by the government is often taken as a "seal of
approval" for a given product that reassures potential buyers
in the private sector.

   History offers examples of variable success in promoting
the widespread public use of specific information technologies
through government standards.

   +    The DES was highly successful. DES was first adopted
as a cryptographic standard for federal use in 1977. Since
then, its use has become commonplace in cryptographic
applications around the world, and many implementations of DES
now exist worldwide.

   +    A less successful standard is GOSIP, the Government
OSI Profile, FIPS-146.(6) The GOSIP was intended to specify
the details of an OSI configuration for use in the government
so that interoperable OSI network products could be procured
from commercial vendors and to encourage the development of a
commercial market for such products. GOSIP has largely failed
in this
effort, and network products based on the TCP/IP protocols now
dominate the market.(7)

   In the case of the EES, the government chose not to seek
legislation outlawing cryptography without features for
exceptional access, but chose instead to use the EES to
influence the marketplace for cryptography. This point was
acknowledged by Administration officials to the committee on
a number of occasions. Specifically, the government hoped that
the adoption of the EES to ensure secure communications within
the government and for communications of other parties with
the federal government would lead to a significant demand for
EES-compliant devices, thus making possible production in
larger quantities and thereby driving unit costs down and
making EES-compliant devices more attractive to other users.
A secondary effect would be the fact that two nongovernmental
parties wishing to engage in secure communications would be
most likely to use EES-compliant devices if they already owned
them rather than purchasing other devices. As part of this
strategy to influence the market, the government persuaded
AT&T in 1992 to base a secure telephone on the EES.

   In the case of the Fortezza card, the large government
procurement for use with the Defense Messaging System may well
lower unit costs sufficiently that vendors of products
intended solely for the commercial nondefense market will
build support for the Fortezza card into their products.(8)
Given the wide availability of PC-Card slots on essentially
all notebook and laptop computers, it is not inconceivable
that the security advantages offered by hardware-based
authentication would find a wide commercial market. At the
same time, the disadvantages of hardware-based cryptographic
functionality discussed in Chapter 5 would remain as well.

----------

   (6)  OSI refers to Open Systems Interconnect, a
standardized suite of international networking protocols
developed and promulgated in the early 1980s.

   (7)  See CSTB, *Realizing the Information Future*, 1994,
Chapter 6.

   (8)  In a recent contract, a vendor agreed to provide
Fortezza cards at $69 per card. See Paul Constance, "After
Complaining $99 Was Too Low, Fortezza Vendors Come in at $69,"
*Government Computer News*, October 2, 1995, p. 6.

____________________________________________________________


               6.2.3 Implementation of Policy:
         Fear, Uncertainty, Doubt, Delay, Complexity

   The implementation of policy contributes to how those
affected by policy will respond to it. This important element
is often unstated, and it refers to the role of government in
creating a climate of predictability. A government that speaks
with multiple voices on a question of policy, or one that
articulates isolated elements of policy in a piecemeal
fashion, or one that leaves the stakeholders uncertain about
what is or is not permissible, creates an environment of fear,
uncertainty, and doubt that can inhibit action. Such an
environment can result from a deliberate choice on the part of
policy makers, or it can be inadvertent, resulting from
overlapping and/or multiple sources of authority that may have
at least partial responsibility for the policy area in
question. Decisions made behind closed doors and protected by
government security classifications tend to reinforce the
concerns of those who believe that fear, uncertainty, and
doubt are created deliberately rather than inadvertently.

   The committee observes that cryptography policy has indeed
been shrouded in secrecy for many years and that many agencies
have partial responsibility in this area. It also believes
that fear, uncertainty, and doubt are common in the
marketplace. For example, the introduction of nonmarket-driven
standards such as the DSS and the EES may have created market
uncertainty that impeded the rapid proliferation of
high-quality products with encryption capabilities both
internationally and domestically. Uncertainty over whether or
not the federal government would recertify the DES as a FIPS
has plagued the marketplace in recent years, because
withdrawal of the DES as a FIPS could cause considerable
consternation among some potential buyers who might suddenly
find themselves using products based on a decertified
standard, although in
fact the government has recertified the DES in each case. On
the other hand, the DES is also a standard of the American
National Standards Institute and the American Bankers
Association, and if these organizations retain their
endorsement of the DES, the DES will arguably represent a
viable algorithm for a wide range of products.

   Many parties in industry believe that the complexity and
opacity of the decisionmaking process with respect to
cryptography are major contributors to this air of
uncertainty. Of course, the creation of uncertainty may be
desirable from the perspective of policy makers if their goal
is to retard action in a given area. Impeding the spread of
high-quality products with encryption capabilities
internationally is the stated and explicit goal of export
controls; on the domestic front, impeding the spread of
high-quality products with encryption capabilities has been a
desirable outcome from the standpoint of senior officials in
the law enforcement community.

   A very good example of the impact of fear, uncertainty, and
doubt on the marketplace for cryptography can be found in the
impact of government action (or more precisely, inaction) with
respect to authentication. As noted in Chapter 2, cryptography
supports digital signatures, a technology that provides high
assurance for both data integrity and user authentication.
However, federal actions in this area have led to considerable
controversy. One example is that the federal government failed
to adopt what was (and still is) the de facto commercial
standard algorithm for digital signatures, namely the RSA
algorithm. Government sources told the committee that the fact
that the RSA algorithm is capable of providing strong
confidentiality as well as digital signatures was one reason
that the government deemed it inappropriate for promulgation
as a FIPS.(9) Further, the government's adoption of the
Digital Signature Standard(10) in 1993 occurred despite
widespread opposition from industry to the specifics of that
standard.

____________________________________________________________


                      6.2.4 R&D Funding

   An agency that supports research (and/or conducts such
research on its own in-house) in a given area of technology is
often able to shape the future options from which the private
sector and policy makers will choose. For example, an agency
that wishes to maintain a monopoly of expertise in a given
area may not fund promising research proposals that originate
from outside. Multiple agencies active in funding a given area
may thus yield a broader range of options for future policy
makers.

   In the context of cryptography and computer and
communications security, it is relevant that the National
Security Agency (NSA) has been the main supporter and
performer of R&D in this area.(11) The NSA's R&D orientation
has been, quite properly, on technologies that would help it
to perform more effectively and efficiently its two basic
missions: (1) defending national security by designing and
deploying strong cryptography to protect classified
information and (2) performing signals intelligence against
potential foreign adversaries. On the information security
side of its operations, NSA-developed technology has
extraordinary strengths that have proven well suited to the
protection of classified information relevant to defense or
foreign policy needs.

   How useful such technologies will prove for corporate
information security remains to be seen. Increasing needs for
information security in the private sector suggest that NSA
technology may have much to offer, especially if such
technology can be made available to the private sector without
limitation. At the same time, the environment in which private
sector information security needs are manifested may be
different enough from the defense and foreign policy worlds
that these technologies may not be particularly relevant in
practice to the private sector. Furthermore, the rapid pace of
commercial developments in information technology may make it
difficult for the private sector to use technologies developed
for national security purposes in a less rapidly changing
environment.

   These observations suggest that commercial users of
cryptographic technology may be able to draw on NSA
technologies for certain applications, and will most certainly
draw on nonclassified R&D work in cryptography (both in the
United States and abroad); even the latter will have a high
degree of sophistication. Precisely how the private sector
will draw on these two sources of technology will depend on
policy decisions to be made in the future. Finally, it is
worth noting that nonclassified research on cryptography
appearing in the open literature has been one of the most
important factors leading to the dilemma that policy makers
face today with respect to cryptography.

----------

   (9)  The specific concern was that widespread adoption of
RSA as a signature standard would result in an infrastructure
that could support the easy and convenient distribution of DES
keys. The two other reasons for the government's rejection of
RSA were the desire to promulgate an approach to digital
signatures that would be royalty-free (RSA is a patented
algorithm) and the desire to reduce overall system costs for
digital signatures. For a discussion of the intellectual
issues involved in the rejection of the RSA algorithm and the
concern over confidentiality, see Office of Technology
Assessment, *Information Security and Privacy in Network
Environments*, Washington, D.C., September 1994, pp. 167-168
and pp. 217-222.

   (10) The Digital Signature Standard (DSS) is based on an
unclassified algorithm known as the Digital Signature
Algorithm that does not explicitly support confidentiality.
However, the DSS and its supporting documentation do amount to
U.S. government endorsement of a particular one-way hash
function, and document in detail how to generate the
appropriate number-theoretic constants needed to implement it.
Given this standard, it is possible to design a
confidentiality standard that is as secure as the DSS. In
other words, the DSS is a road map to a confidentiality
standard, although it is not such a standard explicitly.
Whether an ersatz confidentiality standard would pass muster
in the commercial market remains to be seen.
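
   A small sketch (illustrative only, using toy numbers in
place of values hundreds of digits long) shows the sense in
which this is so: DSS-style constants p, q, and g, together
with the endorsed one-way hash, also support a Diffie-Hellman
key agreement from which a symmetric key can be derived.

   import hashlib

   p, q, g = 23, 11, 4       # toy parameters; g has order q mod p

   a, b = 3, 7               # private exponents of the two parties
   A = pow(g, a, p)          # public value sent by the first party
   B = pow(g, b, p)          # public value sent by the second party

   shared_1 = pow(B, a, p)   # each side computes g**(a*b) mod p
   shared_2 = pow(A, b, p)
   assert shared_1 == shared_2

   # The endorsed one-way hash turns the agreed value into a key
   # that a symmetric cipher could then use for confidentiality.
   key = hashlib.sha1(str(shared_1).encode()).digest()
   print("derived key:", key.hex())

The point is not that this particular construction would be
adopted, only that the number-theoretic ingredients endorsed
by the DSS suffice to build one.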

   (11) It is important to distinguish between R&D undertaken
internally and externally to NSA. Internal R&D work can be
controlled and kept private to NSA; by contrast, it is much
more difficult to control the extent to which external R&D
work is disseminated. Thus, decisions regarding specific
external cryptography-related R&D projects could promote or
inhibit public knowledge of cryptography.

____________________________________________________________


           6.2.5 Patents and Intellectual Property

   A number of patents involving cryptography have been
issued. Patents affect cryptography because patent protection
can be used by both vendors and governments to keep various
patented approaches to cryptography out of broad use in the
public domain.(12)

   The DES, first issued in 1977, is an open standard, and the
algorithm it uses is widely known. According to NIST, devices
implementing the DES may be covered by U.S. and foreign
patents issued to IBM (although the original patents have by
now expired).(13) However, IBM granted nonexclusive,
royalty-free licenses under the patents to make, use, and sell
apparatus that complies with the standard.

   RSA Data Security Inc. (RSA) holds the licensing rights to
RC2, RC4, and RC5, which are variable-key-length ciphers
developed by Ronald Rivest.(14) RC2 and RC4 are not patented,
but rather, are protected as trade secrets (although their
algorithms have been published on the Internet without RSA's
approval). RSA has applied for a patent for RC5 and has
proposed it as a security standard for the Internet. Another
alternative for data encryption is IDEA, a block cipher
developed by James Massey and Xuejia Lai of the Swiss Federal
Institute of Technology (ETH), Zurich. The patent rights to
IDEA are held by Ascom Systec AG, a Swiss firm. IDEA is
implemented in the software application PGP.

   In addition to the above patents, which address
symmetric-key encryption technologies, there are several
important patent issues related to public-key cryptography.
The concept of public-key cryptography, as well as some
specific implementing methods, is covered by U.S. Patents
4,200,770 (M. Hellman, W. Diffie, and R. Merkle, 1980) and
4,218,582 (M. Hellman and R. Merkle, 1980), both of which are
owned by Stanford University. The basic patent for the RSA
public-key cryptosystem, U.S. Patent 4,405,829 (R. Rivest, A.
Shamir, and L. Adleman, 1983), is owned by the Massachusetts
Institute of Technology. The '582 patent has counterparts in
several other countries. These basic public-key patents and
related ones have been licensed to many vendors worldwide.
With the breakup of the partnership that administered the
licensing of Stanford University's and MIT's patents, the
validity of the various patents has become the subject of
current litigation. In any event, the terms will expire in
1997 for the first two of the above patents and in 2000 for
the third.(15)

   In 1994, NIST issued the Digital Signature Standard, FIPS
186. The DSS uses the NIST-developed Digital Signature
Algorithm, which according to NIST is available for use
without a license. However, during the DSS's development,
concern arose about whether the DSS might infringe on the
public-key patents cited above, as well as a patent related to
signature verification held by Claus Schnorr of Goethe
University in Frankfurt, Germany.(16) NIST asserts that the
DSS does not infringe on any of these patents.(17) At the
least, U.S. government users have the right to use public-key
cryptography without paying a license fee for the Stanford and
MIT patents because the concepts were developed at these
universities with federal research support. However, there
remains some disagreement about whether commercial uses of the
DSS (for example, in a public-key infrastructure) will require
a license from one or more of the various patent holders.

   A potential patent dispute regarding the key-escrow
features of the EES may have been headed off by NIST's
negotiation of a nonexclusive licensing agreement with Silvio
Micali in 1994.(18) Micali has patents that are relevant to
dividing a key into components that can be separately
safeguarded (e.g., by escrow agents) and later combined to
recover the original key.
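
   The general idea can be illustrated with a short sketch.
This is a generic exclusive-OR split -- not Micali's patented
method and not the mechanism actually used in the EES -- but
it shows how a key can be divided into components such that
all of them are needed to reconstruct it, while any incomplete
subset reveals nothing about the key:

   import os

   def split_key(key, n=2):
       """Split key into n components; all n are needed to recover it."""
       parts = [os.urandom(len(key)) for _ in range(n - 1)]
       last = key
       for part in parts:
           last = bytes(a ^ b for a, b in zip(last, part))
       return parts + [last]

   def recover_key(parts):
       """Exclusive-OR all components together to recover the key."""
       key = bytes(len(parts[0]))
       for part in parts:
           key = bytes(a ^ b for a, b in zip(key, part))
       return key

   session_key = os.urandom(8)                # e.g., a DES-sized key
   components = split_key(session_key, n=2)   # one component per escrow agent
   assert recover_key(components) == session_key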

   A provision of the U.S. Code (35 USC 181) allows
the Patent and Trademark Office (PTO) to withhold a patent and
order that the invention be kept secret if publication of the
patent is detrimental to national security. Relevant to
cryptography is the fact that a patent application for the
Skipjack encryption algorithm was filed on February 7, 1994.
This application was examined and all of the claims allowed,
and notification of the algorithm's patentability was issued
on March 28, 1995. Based on a determination by NSA, the Armed
Services Patent Advisory Board issued a secrecy order for the
Skipjack patent application; the effect of the secrecy order
is that even though Skipjack can be patented, a patent will
not be issued until the secrecy order is rescinded. Since
applications are kept in confidence until a patent is issued,
no uninvolved party can find out any information concerning
the application. In this way, the patentability of the
algorithm has been established without having to disclose the
detailed information publicly.(19) Since 35 USC 181 also
provides that the PTO can rescind the secrecy order upon
notification that publication is no longer detrimental to
national security, compromise and subsequent public revelation
of the Skipjack algorithm (e.g., through reverse-engineering
of a Clipper chip) might well cause a patent to be issued for
Skipjack that would give the U.S. government control over its
subsequent use in products.

----------

   (12) See footnote 9.

   (13) National Institute of Standards and Technology, "FIPS
46-2: Announcing the Data Encryption Standard," December 30,
1993.

   (14) See RSA Data Security Inc., home page, at
http://www.rsa.com.

   (15) In 1994, Congress changed patent terms from 17 years
after issuance to 20 years from the date of filing the patent
application; however, applications for these patents were
filed in or before 1977, and so they will not be affected.

   (16) See Office of Technology Assessment, *Information
Security and Privacy in Network Environments*, September 1994,
p. 220.

   (17) National Institute of Standards and Technology,
"Digital Signature Standard," *Computer Systems Laboratory
(CSL) Bulletin*, NIST, Gaithersburg, Maryland, November 1994.
Available on line from http://csrc.ncsl.nist.gov/nistbul/
csl94-11.txt.

   (18) National Institute of Standards and Technology press
release, "Patent Agreement Removes Perceived Barrier to
Telecommunications Security System," NIST, Gaithersburg,
Maryland, July 11, 1994. Available on line from
gopher://rigel.nist.gov:7346/0/.docs/.releases/N94-28.REL.

   (19) Clinton C. Brooks, National Security Agency, provided
this information to the committee in an e-mail message dated
May 23, 1995.

____________________________________________________________


           6.2.6 Formal and Informal Arrangements
      with Various Other Governments and Organizations

   International agreements can be an important part of
national policy. For example, for many years the CoCom nations
cooperated in establishing a common export control policy on
militarily significant items with civilian purposes, including
cryptography (Appendix G has more details).

   International agreements can take a variety of different
forms. The most formal type of agreement is a treaty between
(or among) nations that specifies the permissible, required,
and prohibited actions of the various nations. Treaties
require ratification by the relevant national political bodies
as well as signature before entry into force. In the United
States, treaties must be approved by the Senate by a two-thirds
vote. Sometimes treaties are self-executing, but often they
need to be followed by implementing legislation enacted by the
Congress in the normal manner for legislation.

   Another type of agreement is an executive agreement. In the
United States, executive agreements are, as the name implies,
entered into by the executive branch. Unlike the treaty, no
Senate ratification is involved, but the executive branch has
frequently sought approval by a majority of both houses of the
Congress. For all practical purposes executive agreements with
other countries bind the United States in international law
just as firmly as treaties do, although the treaty may carry
greater weight internally due to the concurrence by a
two-thirds vote of the Senate. Executive agreements can also
be changed with much greater flexibility than treaties.

   Finally, nations can agree to cooperate through diplomacy.
Even though cooperation is not legally required under such
arrangements, informal understandings can work very
effectively so long as relationships remain good and the
countries involved continue to have common goals. In fact,
informal understanding is the main product of much diplomacy
and is the form that most of the world's business between
governments takes. For example, although the United States
maintains formal mutual legal assistance treaties with a
number of nations, U.S. law enforcement agencies cooperate
(sometimes extensively) with foreign counterparts in a much
larger number of nations. Indeed, in some instances, such
cooperation is stronger, more reliable, and more extensive
than is the case with nations that are a party to a formal
mutual legal assistance treaty with the United States.

   Note that the more formal the agreement, the more public is
the substance of the agreement; such publicity often leads to
attention that may compromise important and very sensitive
matters, such as the extent to which a nation supports a given
policy position or the scope and nature of a nation's
capabilities. When informal arrangements are negotiated and
entered into force, they may not be known by all citizens or
even by all parts of the governments involved. Because they
are less public, informal arrangements also allow more
latitude for governments to make decisions on a case-by-case
basis. In conducting negotiations that may involve sensitive
matters or agreements that may require considerable
flexibility, governments are often inclined to pursue more
informal avenues of approach.


             6.2.7 Certification and Evaluation

   Analogous to Good Housekeeping seals of approval or "check
ratings" for products reviewed in Consumer Reports,
independent testing and certification of products can provide
assurance in the commercial marketplace that a product can
indeed deliver the services and functionality that it purports
to deliver. For example, the results of government crash tests
of automobiles are widely circulated as data relevant to
consumer purchases of automobiles. Government certification
that a commercial airplane is safe to fly provides significant
reassurance to the public about flight safety. At the same
time, while evaluation and certification would in principle
help users to avoid products that implement a sound algorithm
in a way that undermines the security offered by the
algorithm, the actual behavior of users demonstrates that
certification of a product is not necessarily a selling point.
Many of the DES products in the United States have never been
evaluated relative to FS-1027 or FIPS 140-1, and yet such
products are used by many parties.

   The government track record in the cryptography and
computer security domain is mixed. For example, a number of
DES products were evaluated with respect to FS-1027 (the
precursor to FIPS 140-1) over several years and a number of
products were certified by NSA. For a time, government
agencies purchased DES hardware only if it met FS-1027 or
FIPS 140. Commercial clients often required compliance because
it provided the only assurance that a product embodying DES
was secure in a broader sense. In this case, the alignment
between government and commercial security requirements seems
to have been reasonably good and thus this program had some
success. Two problems with this evaluation program were that
it addressed only hardware and that it lagged in allowing use
of public-key management technology in products (in the
absence of suitable standards).

   A second attempt to provide product evaluation was
represented by the National Computer Security Center (NCSC),
which was established by the Department of Defense for the
purpose of certifying various computer systems for security.
The theory underlying the center was that the government
needed secure systems but could not afford to build them. The
quid pro quo was that industry would design and implement
secure operating systems that the government would test and
evaluate at no cost to industry; systems meeting government
requirements would receive a seal of approval.

   Although the NCSC still exists, the security evaluation
program it sponsors, the Trusted Product Evaluation Program
(TPEP), has more or less lapsed into disuse. In the judgment
of many, the TPEP was a relative failure because of an
underlying premise that the information security problems of
the civil government and the private sector were identical to
those of the defense establishment. In fact, the private
sector has for the most part found a military approach to
computer security inadequate for its needs. A second major
problem was that the time scale of the evaluation process was
much longer than the private sector could tolerate, and
products that depended on NCSC evaluation would reach market
already on the road to obsolescence, perhaps superseded by a
new version to which a given evaluation would not necessarily
apply. In late 1995, articles in the trade press reported that
the Department of Defense was attempting to revive the
evaluation program in a way that would involve private
contractors.(20)

   A recent attempt to provide certification services is the
Cryptographic Module Validation Program (CMVP), which tests
products for conformance to FIPS 140-1, *Security Requirements
for Cryptographic Modules*.(21) FIPS 140-1 provides a broad
framework for all NIST cryptographic standards, specifying
design, function, and documentation requirements for
cryptographic modules -- including hardware, software,
"firmware," and combinations thereof -- used to protect
sensitive, unclassified information in computer and
telecommunication systems.(22) The CMVP was established in
July 1995 by NIST and the Communications Security
Establishment of the government of Canada.

   The validation program is currently optional: agencies may
purchase products based on the vendor's written assurance of
compliance with the standard. However, beginning in 1997, U.S.
federal procurement will require cryptographic products to be
validated by an independent third party. Under the program,
vendors will submit their product for testing by an
independent, NIST-accredited laboratory.(23)

   Such a laboratory evaluates both the product and its
associated documentation against the requirements in FIPS
140-1. NIST has also specified test procedures for all aspects
of the standard. Examples include attempting to penetrate
tamper-resistant coatings and casings, inspecting software
source code and documentation, attempting to bypass protection
of stored secret keys, and statistically verifying the
performance of random number generators.(24) The vendor sends
the results of independent tests to NIST, which determines
whether these results show that the tested product complies
with the standard and then issues validation certificates for
products that do. Time will tell whether the CMVP will prove
more successful than the NCSC.
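
   One of the simpler test procedures mentioned above --
statistical verification of a random number generator -- can
be sketched as follows. The sketch is illustrative rather than
normative: the test operates on a 20,000-bit sample, and the
acceptance band shown (more than 9,654 and fewer than 10,346
ones) is the commonly cited FIPS 140-1 "monobit" bound; the
standard itself remains the authoritative statement of the
requirement.

   import os

   def monobit_test(sample):
       """FIPS 140-1-style monobit test on a 20,000-bit sample."""
       assert len(sample) == 2500, "the test is defined on 20,000 bits"
       ones = sum(bin(byte).count("1") for byte in sample)
       return 9654 < ones < 10346

   print(monobit_test(os.urandom(2500)))   # a sound generator should pass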

----------

   (20) See for example, Paul Constance, "Secure Products List
Gets CPR," *Government Computing News*, November 13, 1995, p.
40.

   (21) National Institute of Standards and Technology press
release, "Cryptographic Module Validation Program Announced,"
NIST, Gaithersburg, Maryland, July 17, 1995.

   (22) National Institute of Standards and Technology,
*Federal Information Processing Standards Publication 140-1:
Security Requirements for Cryptographic Modules*, NIST,
Gaithersburg, Maryland, January 11, 1994.

   (23) As of September 1995, the National Institute of
Standards and Technology's National Voluntary Laboratory
Accreditation Program had accredited three U.S. companies as
competent to perform the necessary procedures: CygnaCom
Solutions Laboratory (McLean, Va.), DOMUS Software Limited
(Ottawa, Canada), and InfoGard Laboratories (San Luis Obispo,
Calif.). A current list of these companies is available from
http://csrc.ncsl.nist.gov/fips/1401labs.txt.

   (24) National Institute of Standards and Technology,
*Derived Test Requirements for FIPS PUB 140-1*, NIST,
Gaithersburg, Maryland, March 1995.

____________________________________________________________


                6.2.8 Nonstatutory Influence

   By virtue of its size and role in society, government has
considerable ability to influence public opinion and to build
support for policies. In many cases, this ability is not based
on specific legislative authority, but rather on the use of
the "bully pulpit." For example, the government can act in a
convening role to bring focus and to stimulate the private
sector to work on a problem.(25) The bully pulpit can be used
to convey a sense of urgency that is tremendously important in
how the private sector reacts, especially among large companies
that try to be good corporate citizens and responsive to informal
persuasion by senior government officials. Both vendors and
users can be influenced by such authority.(26)

   In the security domain, the Clinton Administration has
sponsored several widely publicized public meetings to address
security dimensions of the national information infrastructure
(NII). These were meetings of the NII Security Issues
Forum, held in 1994 and 1995.(27) They were announced in the
*Federal Register* and were intended to provide a forum in
which members of the interested public could air their
concerns about security.

   In the cryptography domain, the U.S. government has used
its convening authority to seek comments on various proposed
cryptographic standards and to hold a number of workshops
related to key escrow (discussed in Chapter 5). Many in the
affected communities believe that these attempts at outreach
were too few and too late to influence anything more than the
details of a policy outline upon which government had already
decided. A second example demonstrating government's
nonstatutory influence was the successful government request
to AT&T to base the 3600 Secure Telephone Unit on the Clipper
chip instead of an unescrowed DES chip (as described in
Appendix E).

----------

   (25) One advantage of government's acting in this way is
that it may provide some assurance to the private sector that
any coordinated action it may take in response to government
calls for action will be less likely to be interpreted by
government as a violation of antitrust provisions.

   (26) For example, in responding favorably to a request by
President Clinton for a particular action in a labor dispute,
the chairman of American Airlines noted, "He [President
Clinton] is the elected leader of the country. For any citizen
or any company or any union to say 'No, I won't do that' to
the President requires an awfully good reason." See Gwen
Ifill, "Strike at American Airlines; Airline Strike Ends as
Clinton Steps In," *New York Times*, November 23, 1993, p. 1.

   (27) Office of Management and Budget press release,
"National Information Infrastructure Security Issues Forum
Releases 'NII Security: The Federal Role,' " Washington, D.C.,
June 14, 1995. The subjects of these meetings were "Commercial
Security on the NII," which focused on the need for
intellectual property rights protection in the entertainment,
software, and computer industries; "Security of Insurance and
Financial Information"; "Security of Health and Education
Information"; "Security of the Electronic Delivery of
Government Services and Information"; "Security for
Intelligent Transportation Systems and Trade Information"; and
"The NII: Will It Be There When You Need It?," addressing the
availability and reliability of the Internet, the public
switched telecommunications network, and cable, wireless, and
satellite communications services. Available on line from
gopher://ntiantl.ntia.doc.gov:70/00/iitf/security/files/
fedworld.txt.

____________________________________________________________


                6.2.9 Interagency Agreements
                 Within the Executive Branch

   Given that one government agency may have expertise or
personnel that would assist another agency in doing its job
better, government agencies often conclude agreements between
them that specify the terms and nature of their cooperative
efforts. In the domain of cryptography policy, NSA's technical
expertise in the field has led to memorandums of understanding
with NIST and with the FBI (Appendix L).

   The memorandum of understanding (MOU) between NIST and NSA
outlines several areas of cooperation between the two agencies
that are intended to implement the Computer Security Act of
1987; joint NIST-NSA activities are described in Box 6.2. This
MOU has been the subject of some controversy, with critics
believing that the MOU and its implementation cede too much
authority to NSA and defenders believing that the MOU is
faithful to both the spirit and letter of the Computer
Security Act of 1987.(28)

   The MOU between the FBI and NSA, declassified for the
National Research Council, states that the NSA will provide
assistance to the FBI upon request, when the assistance is
consistent with NSA policy (including protection of sources
and methods), and in accordance with certain administrative
requirements. Furthermore, if the assistance requested is for
the support of an activity that may be conducted only pursuant
to a court order or with the authorization of the Attorney
General, the FBI request to the NSA must include a copy of
that order or authorization.

   In 1995, the National Security Agency, the Advanced
Research Projects Agency, and the Defense Information Systems
Agency signed a memorandum of agreement (MOA) to coordinate
research and development efforts in system security. This MOA
provides for the establishment of the Information System
Security Research Joint Technology Office (ISSR-JTO). The role
of the ISSR-JTO is "to optimize use of the limited research
funds available, and strengthen the responsiveness of the
programs to DISA, expediting delivery of technologies that
meet DISA's requirements to safeguard the confidentiality,
integrity, authenticity, and availability of data in
Department of Defense information systems, provide a robust
first line of defense for defensive information warfare, and
permit electronic commerce between the Department of Defense
and its contractors."(29)

----------

   (28) For more discussion of these critical perspectives,
see U.S. Congress, Office of Technology Assessment,
*Information Security and Privacy in Network Environments*,
OTA-TCT-606, U.S. Government Printing Office, Washington,
D.C., September 1994, Box 4-8, pp. 164-171.

   (29) See "Memorandum of Agreement Between the Advanced
Research Projects Agency, the Defense Information Systems
Agency, and the National Security Agency Concerning the
Information Systems Security Research Joint Technology
Office"; MOA effective April 2, 1995. The full text of the MOA
is available from http://www.ito.darpa.mil/ResearchAreas/
Information_Survivability/MOA.html.

____________________________________________________________


         6.3 ORGANIZATION OF THE FEDERAL GOVERNMENT
            WITH RESPECT TO INFORMATION SECURITY


          6.3.1 Role of National Security vis-a-vis
            Civilian Information Infrastructures

   The extent to which the traditional national security model
is appropriate for an information infrastructure supporting
both civilian and military applications is a major point of
contention in the public debate. There are two schools of
thought on this subject:

   +    The traditional national security model should be
applied to the national information infrastructure, because
protecting those networks also protects services that are
essential to the military, and the role of the defense
establishment is indeed to protect important components of the
national infrastructure that private citizens and businesses
depend upon.(30)

   +    The traditional national security model should not be
applied to the national information infrastructure, because
the needs of civilian activities are so different from those
of the military, and the imposition of a national security
model would impose an unacceptable burden on the civilian
sector. Proponents of this view argue that the traditional
national security model of information security -- a top-down
approach to information security management -- would be very
difficult to scale up to a highly heterogeneous private sector
involving hundreds of millions of people and tens of millions
of computers in the United States alone.

   There is essential unanimity that the world of classified
information (both military and nonmilitary) is properly a
domain in which the DOD and NSA can and should exercise
considerable influence. But moving outside this domain raises
many questions that have a high profile in the public debate.
Specifically, what should the DOD and NSA role be in dealing
with the following categories of information:

   1.   Unclassified government information that is military
        in nature,

   2.   Unclassified government information that is
        nonmilitary in nature, and

   3.   Nongovernment information.

   To date, policy decisions have been made that give the DOD
jurisdiction in information security policy for category 1.
For categories 2 and 3, the debate continues. It is clear that
the security needs for business and for national security
purposes are both similar (Box 6.3) and different (Box 6.4).
In category 2, the argument is made that DOD and NSA have a
great deal of expertise in protecting information, and that
the government should draw on an enormous historical
investment in NSA expertise to protect all government
information. At the same time, NIST has the responsibility for
protecting such information under the Computer Security Act of
1987, with NSA's role being one of providing technical
assistance. Some commentators believe that NIST has not
received resources adequate to support its role in this
area.(31)

   In category 3, the same argument is made with respect to
nongovernment information on the grounds that the proper role
of government is to serve the needs of the entire nation. A
second argument is made that the military depends critically
on nongovernment information infrastructures (e.g., the public
switched telecommunications network) and that it is essential
to protect those networks not just for civilian use but also
for military purposes. (Note that NSA does not have broad
authority to assist private industry with information
security, although it does conduct for industry, upon request,
unclassified briefings related to foreign information security
threats; NSD-42 (text provided in Appendix L) also gives NSA
the authority to work with private industry when such work
involves national security information systems used by private
industry.)

----------

   (30) For example, the Joint Security Commission recommended
that "policy formulation for information systems security be
consolidated under a joint DoD/DCI security executive
committee, and that the committee oversee development of a
coherent network-oriented information systems security policy
for the DoD and the Intelligence Community that could also
serve the entire government." See Joint Security Commission,
*Redefining Security*, Washington, D.C., February 28, 1994, p.
107.

   (31) For example, the Office of Technology Assessment
stated that "the current state of government security practice
for unclassified information has been depressed by the chronic
shortage of resources for NIST's computer security activities
in fulfillment of its government-wide responsibilities under
the Computer Security Act of 1987. Since enactment of the
Computer Security Act, there has been no serious (i.e.,
adequately funded and properly staffed), sustained effort to
establish a center of information-security expertise and
leadership outside the defense/intelligence communities." See
U.S. Congress, Office of Technology Assessment, *Issue Update
on Information Security and Privacy in Network Environments*,
OTA-BP-ITC-147, U.S. Government Printing Office, Washington,
D.C., June 1995, p. 42. A similar conclusion was reached by
the Board on Assessment of NIST Programs of the National
Research Council, which wrote that "the Computer Security
Division is severely understaffed and underfunded given its
statutory security responsibilities, the growing national
recognition of the need to protect unclassified but sensitive
information, and the unique role the division can play in
fostering security in commercial architectures, hardware, and
software." See Board on Assessment of NIST Programs, National
Research Council, *An Assessment of the National Institute of
Standards and Technology*, Fiscal Year 1993, National Academy
Press, Washington, D.C., 1994, p. 228.

____________________________________________________________


            6.3.2 Other Government Entities with
              Influence on Information Security

   As noted above, NSA has primary responsibility for
information security in the classified domain, while NIST has
primary responsibility for information security in the
unclassified domain, but for government information only. No
organization or entity within the federal government has the
responsibility for promoting information security in the
private sector.(32)

   The Security Policy Board (SPB) does have a coordination
function. Specifically, the charge of the SPB is to consider,
coordinate, and recommend for implementation to the President
policy directives for U.S. security policies, procedures, and
practices, including those related to security for both
classified and unclassified government information. The SPB is
intended to be the principal mechanism for reviewing and
proposing legislation and executive orders pertaining to
security policy, procedures, and practices. The Security
Policy Advisory Board provides a nongovernmental perspective
on security policy initiatives to the SPB and independent
input on such matters to the President. The SPB does not have
operational responsibilities.

   Other entities supported by the federal government have
some influence over information security, though little actual
policy-making authority. These include:

   +    The Computer Emergency Response Team (CERT). CERT was
formed by the Defense Advanced Research Projects Agency
(DARPA) in November 1988 in response to the needs exhibited
during the Internet worm incident. CERT's charge is to work
with the Internet community to facilitate its response to
computer security events involving Internet hosts, to take
proactive steps to raise the community's awareness of computer
security issues, and to conduct research targeted at improving
the security of existing systems.(33) CERT offers
around-the-clock technical assistance for responding to
computer security incidents, educates users regarding product
vulnerability through technical documents and seminars, and
provides tools for users to undertake their own vulnerability
analyses.

   +    The Information Infrastructure Task Force's (IITF)
National Information Infrastructure Security Issues Forum. The
forum is charged with addressing institutional, legal, and
technical issues surrounding security in the NII. A draft
report issued by the forum proposes federal actions to address
these issues.(34) The intent of the report, and of the
Security Issues Forum more generally, is to stimulate a
dialogue on how the federal government should cooperate with
other levels of government and the private sector to ensure
that participants can trust the NII. The draft report proposes
a number of security guidelines (proposed NII security
tenets), the adoption of Organization for Economic Cooperation
and Development security principles for use on the NII, and a
number of federal actions to promote security.

   +    The Computer System Security and Privacy Advisory
Board (CSSPAB). CSSPAB was created by the Computer Security
Act of 1987 as a statutory federal public advisory committee.
The law provides that the board shall identify emerging
managerial, technical, administrative, and physical safeguard
issues relative to computer systems security and privacy;
advise the National Institute of Standards and Technology and
the secretary of commerce on security and privacy issues
pertaining to federal computer systems; and report its
findings to the secretary of commerce, the directors of the
Office of Management and Budget and the National Security
Agency, and the appropriate committees of the Congress. The
board's scope is limited to federal computer systems, or those
operated by a contractor on behalf of the federal government,
that process sensitive but unclassified information. The
board's authority does not extend to private sector systems,
systems that process classified information, or DOD
unclassified systems related to military or intelligence
missions as covered by the Warner Amendment (10 USC 2315). The
activities of the board bring it into contact with a broad
cross section of the nondefense agencies and departments;
consequently, it often deals with latent policy considerations
and societal consequences of information technology.

   +    The National Counterintelligence Center (NACIC).
Established in 1994 by Presidential Decision Directive NSC-24,
NACIC is primarily responsible for coordinating national-level
counterintelligence activities, and it reports to the National
Security Council. Operationally, the NACIC works with private
industry through an industry council (consisting of senior
security officials or other senior officials of major U.S.
corporations) and sponsors counterintelligence training and
awareness programs, seminars, and conferences for private
industry. NACIC also produces coordinated national-level,
all-source, foreign intelligence threat assessments to support
private sector entities having responsibility for the
protection of classified, sensitive, or proprietary
information, as well as such assessments for government
use.(35)

   In addition, a number of private organizations (e.g., trade
or professional groups) are active in information security.

----------

   (32) This observation was also made in Computer Science and
Telecommunications Board (CSTB), National Research Council,
*Computers at Risk*, National Academy Press, Washington, D.C.,
1991, a report that proposed an Information Security
Foundation as the most plausible type of organization to
promote information security in the private sector.

   (33) Available on line at http://www.sei.cmu.edu/
technology/cert.faqintro.html.

   (34) Office of Management and Budget press release,
"National Information Infirastructure Security Issues Forum
Releases 'NII Security: The Federal Role,' " Washington, D.C.,
June 14, 1995. Available on line from gopher://ntiantl.
ntia.doc.gov:70/00/iitf/security/files/fedworld.txt.

   (35) National Counterintelligence Center,
*Counterintelligence News and Developments*, Issue No. 1,
NACIC, Washington, D.C. This newsletter can be obtained from
http://www.oss.net/oss.

____________________________________________________________

     6.4 INTERNATIONAL DIMENSIONS OF CRYPTOGRAPHY POLICY


   The cryptography policy of the United States must take into
account a number of international dimensions. Most
importantly, the United States does not have the unquestioned
dominance in the economic, financial, technological, and
political affairs of the world that it might have had at the end
of World War II. Indeed, the U.S. economy is increasingly
intertwined with that of other nations. To the extent that
these economically significant links are based on
communications that must be secure, cryptography is one aspect
of ensuring such security. Differing national policies on
cryptography that lead to difficulties in communicating
internationally work against overall national policies that
are aimed at opening markets and reducing commercial and trade
barriers.

   Other nations have the option to maintain some form of
export controls on cryptography, as well as controls on
imports and use of cryptography; such controls form part of
the context in which U.S. cryptography policy must be
formulated. Specifically, foreign export control regimes more
liberal than that of the United States have the potential to
undercut U.S. export control efforts to limit the spread of
cryptography. On the other hand, foreign controls on imports
and use of cryptography could vitiate relaxation of U.S.
export control laws; indeed, relaxation of U.S. export
control laws might well prompt a larger number of nations to
impose additional barriers on the import and use of
cryptography within their borders. Finally, a number of other
nations have no explicit laws regarding the use of
cryptography, but nevertheless have tools at their disposal to
discourage its use; such tools include laws related to the
postal, telephone, and telegraph (PTT) system, laws related to
content carried by electronic media, laws related to the
protection of domestic industries that discourage the entry of
foreign products, laws related to classification of patents,
and informal arrangements related to licensing of businesses.

   As a first step in harmonizing cryptography policies across
national boundaries, the Organization for Economic Cooperation
and Development (OECD) held a December 1995 meeting in France
among member nations to discuss how these nations were
planning to cope with the public policy problems posed by
cryptography. What the Paris meeting made clear is that many
OECD member nations are starting to come to grips with the
public policy problems posed by encryption, but that the
dialog on harmonizing policies across national borders has not
yet matured. Moreover, national policies are quite fluid at
this time, with various nations considering different types of
regulation regarding the use, export, and import of
cryptography.

   Appendix G contains more discussion of international issues
relevant to national cryptography policy.


                          6.5 RECAP


   While export controls and escrowed encryption are
fundamental pillars of current national cryptography policy,
many other aspects of government action also have some bearing
on it: the Communications Assistance for Law Enforcement Act
(the Digital Telephony Act) calls attention to the relationship
between access to a communications stream and government
access to the plaintext associated with that digital stream.
The former problem must be solved (and was solved, by the
CALEA for telephone communications) before the latter problem
is relevant.

   The government can influence the deployment and use of
cryptography in many ways. Federal Information Processing
Standards often set a "best practice" standard for the private
sector, even though they have no official standing outside
government use. By assuring large-volume sales when a product
is new, government procurement practices can reduce the cost
of preferred cryptography products to the private sector,
giving these products a price advantage over possible
competitors. Policy itself can be implemented in ways that
instill action-inhibiting uncertainty in the private sector.
Government R&D funding and patents on cryptographic algorithms
can narrow technical options to some degree. Formal and
informal arrangements with various other governments and
organizations can promote various policies or types of
cooperation. Product certification can be used to provide the
information necessary for a flourishing free market in
products with encryption capabilities. Convening authority can
help to establish the importance of a topic or approach to
policy.

   In some ways, the debate over national cryptography policy
reflects a tension in the role of the national security
establishment with respect to information infrastructures that
are increasingly important to civilian use. In particular, the
use of cryptography has been the domain of national security
and foreign policy for most of its history, a history that has
led to a national cryptography policy that today has the
effect of discouraging the use of cryptography in the private
sector.

____________________________________________________________

                BOX 6.1 Cryptography-related
          Federal Information Processing Standards

   FIPS 46, 46-1 and 46-2: Data Encryption Standard (DES).
Specification of DES algorithm and rules for implementing DES
in hardware. FIPS 46-1 recertifies DES and extends it for
software implementation. FIPS 46-2 reaffirms the Data
Encryption Standard algorithm until 1998 and allows for its
implementation in software, firmware, or hardware. Several
other FIPSs address interoperability and security requirements
for using DES in the physical layer of data communications
(FIPS 139) and in fax machines (FIPS 141), guidelines for
implementing and using DES (FIPS 74), modes of operation of
DES (FIPS 81), and use of DES for authentication purposes
(FIPS 113).

   FIPS 180-1: Secure Hash Standard. This standard specifies
a Secure Hash Algorithm (SHA) that can be used to generate a
condensed representation of a message called a message digest.
The SHA is required for use with the Digital Signature
Algorithm (DSA) as specified in the Digital Signature Standard
(DSS) and whenever a secure hash algorithm is required for
federal applications. The SHA is used by both the transmitter
and intended receiver of a message in computing and verifying
a digital signature.
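
   As an illustration (not part of the standard itself), the
SHA specified in FIPS 180-1 is implemented as sha1 in Python's
standard hashlib module, so a message digest can be produced
with a few lines of code; the sample message is arbitrary.

   import hashlib

   message = b"The quick brown fox jumps over the lazy dog"
   digest = hashlib.sha1(message).hexdigest()
   print(digest)   # the 160-bit condensed representation, as 40 hex digits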

   FIPS 186: Digital Signature Standard. This standard
specifies a Digital Signature Algorithm (DSA) appropriate for
applications requiring a digital rather than a written
signature. The DSA digital signature is a pair of large
numbers represented in a computer as strings of binary digits.
The digital signature is computed using a set of rules (i.e.,
the DSA) and a set of parameters such that the identity of the
signatory and integrity of the data can be verified. The DSA
provides the capability to generate and verify signatures.
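
   The signing and verification rules can be sketched in a few
lines of Python. The sketch is purely illustrative: real DSA
parameters are 512 to 1,024 bits long and the hash value h
would be produced by the Secure Hash Algorithm, whereas the
toy values below exist only to make the arithmetic visible.

   import random

   p, q, g = 23, 11, 4       # toy parameters; g has order q mod p
   x = 3                     # private key, 0 < x < q
   y = pow(g, x, p)          # public key

   def sign(h):
       """Return a DSA-style signature (r, s) on message hash h."""
       while True:
           k = random.randrange(1, q)     # fresh per-message secret
           r = pow(g, k, p) % q
           s = (pow(k, -1, q) * (h + x * r)) % q
           if r and s:
               return r, s

   def verify(h, r, s):
       """Check a signature (r, s) against the public key y."""
       if not (0 < r < q and 0 < s < q):
           return False
       w = pow(s, -1, q)
       u1, u2 = (h * w) % q, (r * w) % q
       return (pow(g, u1, p) * pow(y, u2, p) % p) % q == r

   h = 5                     # stands in for a hashed message, reduced mod q
   r, s = sign(h)
   assert verify(h, r, s)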

   FIPS 140-1: Security Requirements for Cryptographic
Modules. This standard provides specifications for
cryptographic modules which can be used within computer and
telecommunications systems to protect unclassified information
in a variety of different applications.

   FIPS 185: Escrowed Encryption Standard (see main text).

   FIPS 171: Key Management Using ANSI X9.17. This standard
specifies a selection of options for the automated distribution
of keying material by the federal government when using the
protocols of ANSI X9.17. The standard defines procedures for
the manual and automated management of keying materials and
contains a number of options. The selected options will allow
the development of cost-effective systems that will increase
the likelihood of interoperability.

   Other FIPSs address matters related more generally to
computer security.

   FIPS 48: Guidelines on Evaluation of Techniques for
Automated Personal Identification.

   FIPS 83: Guidelines on User Authentication Techniques for
Computer Network Access Control.

   FIPS 112: Password Usage.

   FIPS 113: Computer Data Authentication.

   FIPS 73: Guidelines for Security of Computer Applications.

____________________________________________________________


        BOX 6.2 Overview of Joint NIST-NSA Activities

   The National Security Agency provides technical advice and
assistance to the National Institute of Standards and
Technology in accordance with Public Law 100-235, the Computer
Security Act of 1987. An overview of NIST-NSA activities
follows.

   National conference. NIST and NSA jointly sponsor,
organize, and chair the prestigious National Computer Security
Conference, held yearly for the past 16 years. The conference
is attended by over 2,000 people from government and private
industry.

   Common criteria. NSA is providing technical assistance to
NIST for the development of computer security criteria that
would be used by both the civilian and defense sides of the
government. Representatives from Canada and Europe are joining
the United States in the development of the criteria.

   Product evaluations. NIST and NSA are working together to
perform evaluations of computer security products. In the
Trusted Technology Assessment Program, evaluations of some
computer security products will be performed by NIST and its
laboratories, while others will be performed by NSA. NIST and
NSA engineers routinely exchange information and experiences
to ensure uniformity of evaluations.

   Standards development. NSA supports NIST in the development
of standards that promote interoperability among security
products. Sample standards include security protocol
standards, digital signature standards, key management
standards, and encryption algorithm standards (e.g., the DES,
Skipjack).

   Research and development. Under the Joint R&D Technology
Exchange Program, NIST and NSA hold periodic technical
exchanges to share information on new and ongoing programs.
Research and development are performed in areas such as
security architectures, labeling standards, privilege
management, and identification and authentication. Test-bed
activities are conducted in areas related to electronic mail,
certificate exchange and management, protocol conformity, and
encryption technologies.

----------

SOURCE: National Security Agency, April 1994 (as printed in
U.S. Congress, Office of Technology Assessment, *Information
Security and Privacy in Network Environments*, OTA-TCT-606,
U.S. Government Printing Office, Washington, D.C., September
1994, Box 4-8, p. 165).

____________________________________________________________

      BOX 6.3 Similarities in Commercial Security Needs
                 and National Security Needs

   +    Strong aversion to public discussion of security
breaches. Information about threats is regarded as highly
sensitive. Such a classification makes it very difficult to
conduct effective user education, because security awareness
depends on an understanding of the true scope and nature of a
threat.

   +    Need to make cost-benefit trade-offs in using security
technology. Neither party can afford the resources to protect
against an arbitrary threat model.

   +    Strong preference for self-reliance (government
relying on government, industry relying on industry) to meet
security needs.

   +    Strong need for high security. Both government and
industry need strong cryptography with no limitations for
certain applications. However, the best technology and tools
are often reserved for government and military use because
commercial deployment cannot be adequately controlled,
resulting in opportunities for adversaries to obtain and
examine the systems so that they can plan how to exploit them.

   +    Increasing reliance on commercial products in many
domains (business, Third-World nations).

   +    Increasing scale and sophistication of the security
threat for businesses, which is now approaching that posed by
foreign intelligence services and foreign governments.

   +    Possibility that exceptional access to encrypted
information and data may become important to commercial
entities.

____________________________________________________________

      BOX 6.4 Differences in Commercial Security Needs
                 and National Security Needs

   +    Business wants market-driven cryptographic technology;
government is apprehensive about such technology. For example,
standards are a critical element of market-driven
cryptography. Market forces and the need to respond to rapidly
evolving new markets demand a rapid, flexible approach to
establishing cryptographic standards; businesses want
standards for interoperability, and they want to create market
critical mass in order to lower the cost of cryptography.

   +    By its nature, the environment of business must
include potential adversaries within its security perimeter.
Commercial enterprises now realize that electronic delivery of
their products and services to their customers will increase.
They must design systems and processes explicitly so that
customers can enter into transactions with considerable ease.
Business strategies of today empower the customer through
software and technology. Enterprise networks have value in
allowing the maximum number of people to be attached to the
network. Customers will choose which enterprise to enter in
order to engage in electronic commerce, and making it
difficult for the customer will result in loss of business.
But adversaries masquerading as customers (or who indeed may
be customers themselves) can enter as well. By contrast, the
traditional national security model keeps potential
adversaries outside the security perimeter, allowing access
only to those with a real need. However, to the extent that
U.S. military forces work in collaboration with forces of
other nations, the security perimeter for the military may
also become similarly blurred.

   +    Business paradigms value teamwork, openness, trust,
empowerment, and speed. Such values are often difficult to
sustain in the national security establishment. The cultures
of the two worlds are different and are reflected in, for
example, the unwillingness of business to use multilevel
security systems designed for military use. Such systems
failed the market test, although they met Defense Department
criteria for security.

   +    National security resources (personnel with
cryptographic expertise, funding) are much larger than the
resources in nondefense government sectors and in private
industry and universities. As a result, a great deal of
cryptographic knowledge resides within the world of national
security. Industry wants access to this knowledge to ensure
appropriate use of protocols and strong algorithms, and
development of innovative new products and services.

   +    National security places considerable emphasis on
confidentiality as well as on authentication and integrity.
Today's commercial enterprises stress authentication of users
and data integrity much more than they stress confidentiality
(although this balance may shift in the future). For example,
improperly denying a junior military officer access to a
computer facility may not be particularly important in a
military context, whereas improperly denying a customer access
to his bank account because of a faulty authentication can
pose enormous problems for the bank.

   +    While both businesses and national security
authorities have an interest in safeguarding secrets, the
tools available to businesses to discourage individuals from
disclosing secrets (generally civil suits) are less stringent
than those available to national security authorities
(criminal prosecution).

____________________________________________________________

[End Chapter 6]











                          Part III


        Policy Options, Findings and Recommendations


   Part III consists of two chapters. Chapter 7 considers a
wide range of policy options, ranging in scope and scale from
large to small. Not every item described in Chapter 7 has been
deemed worthy of adoption by the committee, but the committee
hopes to broaden the public understanding of cryptography
policy by discussing ideas that at least have the support of
respectable and responsible elements of the various
stakeholding communities.

   Chapter 8 is a synthesizing chapter that brings together
threads of the previous seven chapters and presents the
committee's findings and recommendations.

____________________________________________________________


                              7

                Policy Options for the Future


   Current national cryptography policy defines only one point
in the space of possible policy options. A major difficulty in
the public debate over cryptography policy has been incomplete
explanation of why the government has rejected certain policy
options. Chapter 7 explores a number of possible alternatives
to current national cryptography policy, selected by the
committee either because they address an important dimension
of national cryptography policy or because they have been
raised by a particular set of stakeholders. Although in the
committee's judgment these alternatives deserve analysis, it
does not follow that they necessarily deserve consideration
for adoption. The committee's judgments about appropriate
policy options are discussed in Chapter 8.


         7.1 EXPORT CONTROL OPTIONS FOR CRYPTOGRAPHY


         7.1.1 Dimensions of Choice for Controlling
                 the Export of Cryptography

   An export control regime -- a set of laws and regulations
governing what may or may not be exported under any specified
set of circumstances -- has many dimensions that can be
considered independently. These dimensions include:

   +    *The type of export license granted*. Three types of
export licenses are available:

        -- A general license, under which export of an item
        does not in general require prior government approval
        but nonetheless is tracked under an export
        declaration;

        -- A special license, under which prior government
        approval is required but which allows multiple and
        continuing transactions under one license validation;
        and

        -- An individual license, under which prior government
        approval is required for each and every transaction.

As a general rule, only individual licenses are granted for
the export of items on the U.S. Munitions List, which includes
"strong" cryptography.(1)

   +    *The strength of a product's cryptographic
capabilities*. Current policy recognizes the difference
between RC2/RC4 algorithms using 40-bit keys and other types
of cryptography, and places fewer and less severe restrictions
on the former.

   +    *The default encryption settings on the delivered
product*. Encryption can be tacitly discouraged, but not
forbidden, by the use of appropriate settings.(2)

   +    *The type of product*. Many different types of
products can incorporate encryption capabilities. Products can
be distinguished by medium (e.g., hardware vs. software)
and/or intended function (e.g., computer vs. communications).

   +    *The extent and nature of features that allow
exceptional access*. The Administration has suggested that it
would permit the export of encryption software with key
lengths of 64 bits or less if the keys were "properly
escrowed."(3) Thus, inclusion in a product of a feature for
exceptional access could be made one condition for allowing
the export of that product. In addition, the existence of
specific institutional arrangements (e.g., which specific
parties would hold the information needed to implement
exceptional access) might be made a condition for the export
of these products.

   +    *The ultimate destination or intended use of the
delivered product*. U.S. export controls have long
distinguished between exports to "friendly" and "hostile"
nations. In addition, licenses have been granted for the sale
of certain controlled products only when a particular benign
use (e.g., financial transactions) could be certified. A
related consideration is the extent to which nations cooperate
with respect to re-export of a controlled product and/or
export of their own products. For example, CoCom member
nations(4) in principle agreed to joint controls on the export
of certain products to the Eastern bloc; as a result, certain
products could be exported to CoCom member nations much more
easily than to other nations.

   At present, there are few clear guidelines that enable
vendors to design a product that will have a high degree of
assurance of being exportable (Chapters 4 and 6). Table 7.1
describes various mechanisms that might be used to manage the
export of products with encryption capabilities.

   The remainder of Section 7.1 describes a number of options
for controlling the export of cryptography, ranging from the
sweeping to the detailed.

----------

   (1)  However, as noted in Chapter 4, the current export
control regime for cryptography involves a number of
categorical exemptions as well as some uncodified
"in-practice" exemptions.

   (2)  Software, and even software-driven devices, commonly
have operational parameters that can be selected or set by a
user. An example is the fax machine that allows many user
choices to be selected by keyboard actions. The parameters
chosen by a manufacturer before it ships a product are
referred to as the "defaults" or "default condition." Users
are generally able to alter such parameters at will.

   (3)  At the time of this writing, the precise definition of
"properly escrowed" is under debate and review in the
Administration. The most recent language on this definition as
of December 1995 is provided in Chapter 5.

   (4)  CoCom refers to the Coordinating Committee, a group of
Western nations (and Japan) that agreed to a common set of
export control practices during the Cold War to control the
export of militarily useful technologies to Eastern bloc
nations. CoCom was disbanded in March 1994, and a successor
regime known as the New Forum is being negotiated as this
report is being written.

____________________________________________________________


                7.1.2 Complete Elimination of
               Export Controls on Cryptography

   The complete elimination of export controls (both the USML
and the Commerce Control List controls) on cryptography is a
proposal that goes beyond most made to date, although
certainly such a position has advocates. If export controls on
cryptography were completely eliminated, it is possible that
within a short time, most information technology products
exported from the United States would have encryption
capabilities. It would be difficult for the U.S. government to
influence the capabilities of these products, or even to
monitor their deployment and use worldwide, because numerous
vendors would most probably be involved.

   Note, however, that the simple elimination of U.S. export
controls on cryptography does not address the fact that other
nations may have import controls and/or restrictions on the
use of cryptography internally. Furthermore, it takes time to
incorporate products into existing infrastructures, and slow
market growth may encourage some vendors to take their time in
developing new products. Thus, simply eliminating U.S. export
controls on cryptography would not ensure markets abroad for
U.S. products with encryption capabilities; indeed, the
elimination of U.S. export controls could in itself stimulate
foreign nations to impose more stringent import controls.
Appendix G contains more discussion of these issues.

   The worldwide removal of all controls on the export,
import, and use of products with encryption capabilities would
likely result in greater standardization of encryption
techniques. Standardization brought about in this manner would
result in:

   +    Higher degrees of international interoperability of
these products;

   +    Broader use, or at least more rapid spread, of
encryption capabilities as the result of the strong
distribution capabilities of U.S. firms;

   +    Higher levels of confidentiality, as a result of
greater ease in adopting more powerful algorithms and longer
keys as standards; and

   +    Greater use of cryptography by hostile, criminal, and
unfriendly parties as they, too, begin to use commercial
products with strong encryption capabilities.

   On the other hand, rapid, large-scale standardization would
be unlikely unless a few integrated software products with
encryption capabilities were able to achieve worldwide usage
very quickly. Consider, for example, that although there are
no restrictions on domestic use of cryptography in the United
States, interoperability is still difficult, in many cases
owing to variability in the systems in which the cryptography
is embedded. Likewise, many algorithms stronger than DES are
well known, and there are no restrictions in place on the
domestic use of such algorithms, and yet only DES even
remotely approaches common usage (and not all DES-based
applications are interoperable).

   For reasons well articulated by the national security and
law enforcement communities (see Chapter 3) and accepted by
the committee, the complete elimination of export controls on
products with encryption capabilities does not seem reasonable
in the short term. Whether export controls will remain
feasible and efficacious in the long term remains to be seen,
although clearly, maintaining even their current level of
effectiveness will become increasingly difficult.


         7.1.3 Transfer of All Cryptography Products
                to the Commerce Control List

   As discussed in Chapter 4, the Commerce Control List (CCL)
complements the U.S. Munitions List (USML) in controlling the
export of cryptography. (Box 4.2 in Chapter 4 describes the
primary difference between the USML and the CCL.) In 1994,
Representative Maria Cantwell (D-Washington) introduced
legislation to transfer all mass-market software products
involving cryptographic functions to the CCL. Although this
legislation never passed, it resulted in the promise and
subsequent delivery of an executive branch report on the
international market for computer software with encryption.(5)

   The Cantwell bill was strongly supported by the software
industry because of the liberal consideration afforded
products controlled for export by the CCL. Many of the bill's
advocates believed that a transfer of jurisdiction to the
Commerce Department would reflect an explicit recognition of
cryptography as a commercial technology that should be
administered under a dual-use export control regime. They
argued that, compared to the USML, the CCL is a more balanced
regime that still has considerable effectiveness in limiting
exports to target destinations and end users.

   On the other hand, national security officials regard the
broad authorities of the Arms Export Control Act (AECA) as
essential to the effective control of encryption exports. The
AECA provides authority for case-by-case regulation of exports
of cryptography to all destinations, based on national
security considerations. In particular, licensing decisions
are not governed by factors such as the country of
destination, end users, end uses, or the existence of
bilateral or multilateral agreements that often limit the
range of discretionary action possible in controlling exports
pursuant to the Export Administration Act. Further, the
national security provisions of the AECA provide a basis for
classifying the specific rationale for any particular export
licensing decision made under its authority, thus protecting
what may be very sensitive information about the particular
circumstances surrounding that decision.

   Although sympathetic to the Cantwell bill's underlying
rationale, the committee believes that the Cantwell bill does
not address the basic dilemma of cryptography policy. As
acknowledged by some of the bill's supporters, transfer of a
product's jurisdiction to the CCL does not mean automatic
decontrol of the product, and national security authorities
could still have considerable input into how exports are
actually licensed. In general, the committee believes that the
idea of split jurisdiction, in which some types of
cryptography are controlled under the CCL and others under the
USML, makes considerable sense given the various national
security implications of widespread use of encryption.
However, where the split should be made is a matter of
discussion; the committee expresses its own judgments on this
point in Chapter 8.

----------

   (5)  U.S. Department of Commerce and National Security
Agency, *A Study of the International Market for Computer
Software with Encryption*, prepared for the Interagency
Working Group on Encryption and Telecommunications Policy,
undated (released on January 11, 1996, by the U.S. Department
of Commerce, Office of the Secretary).

____________________________________________________________


                 7.1.4 End-use Certification

   Explicitly exempted under the current International Traffic
in Arms Regulations (ITAR) is the export of cryptography for
ensuring the confidentiality of financial transactions,
specifically for cryptographic equipment and software that are
"specially designed, developed or modified for use in machines
for banking or money transactions, and restricted to use only
in such transactions."(6) In addition, according to senior
National Security Agency (NSA) officials, cryptographic
systems, equipment, and software are in general freely
exportable for use by U.S.-controlled foreign companies and to
banking and financial institutions for purposes other than
financial transactions, although NSA regards these approvals
as part of the case-by-case review associated with equipment
and products that do not enjoy an explicit exemption in the
ITAR.

   In principle, the ITAR could explicitly exempt products
with encryption capabilities for use by foreign subsidiaries
of U.S. companies, foreign companies that are U.S.-controlled,
and banking and financial institutions. Explicit "vertical"
exemptions for these categories could do much to alleviate
confusion among users, many of whom are currently uncertain
about what cryptographic protection they may be able to use in
their international communications, and could enable vendors
to make better informed judgments about the size of a given
market.

   Specific vertical exemptions could also be made for
different industries (e.g., health care or manufacturing) and
perhaps for large foreign-owned companies that would be both
the largest potential customers and the parties most likely to
be responsible corporate citizens. Diversion to other uses of
products with encryption capabilities sold to these companies
could be inhibited through explicit contractual language
binding the recipient to abide by certain terms; such terms
would be required of any vendor as a condition of sale to a
foreign company, as they are today under USML procedures under
the ITAR. Enforcement of end-use restrictions is discussed in
Chapter 4.

----------

   (6)  International Traffic in Arms Regulations, Section
121.1, Category XIII (b)(1)(ii).

____________________________________________________________


        7.1.5 Nation-by-Nation Relaxation of Controls
     and Harmonization of U.S. Export Control Policy on
               Cryptography with Export/Import
                  Policies of Other Nations

   The United States could give liberal export consideration
to products with encryption capabilities intended for sale to
recipients in a select set of nations;(7) exports to nations
outside this set would be restricted. Nations in the select
set would be expected to have a more or less uniform set of
regulations to control the export of cryptography, resulting
in a more level playing field for U.S. vendors. In addition,
agreements would be needed to control the re-export of
products with encryption capabilities outside this set of
nations.

   Nation-by-nation relaxation of controls is consistent with
the fact that different countries generally receive different
treatment under the U.S. export control regime for military
hardware. For example, exports of U.S. military hardware have
been forbidden to some countries because they were terrorist
nations, and to others because they failed to sign the nuclear
nonproliferation treaty. A harmonization of export control
regimes for cryptography would more closely resemble the
former CoCom approach to control dual-use items than the
approach reflected in the unilateral controls on exports
imposed by the USML.

   From the standpoint of U.S. national security and foreign
policy, a serious problem with harmonization is the fact that
the relationship between the United States and almost all
other nations has elements of both competition and cooperation
that may change over time. The widespread use of U.S. products
with strong encryption capabilities under some circumstances
could compromise U.S. positions with respect to these
competitive elements, although many of these nations are
unlikely to use U.S. products with encryption capabilities for
their most sensitive communications.

   Finally, as is true for other proposals to liberalize U.S.
export controls on cryptography, greater liberalization may
well cause some other nations to impose import controls where
they do not otherwise exist. Such an outcome would shift the
onus for impeding vendor interests away from the U.S.
government; however, depending on the nature of the resulting
import controls, U.S. vendors of information technology
products with encryption capabilities might be faced with the
need to conform to a multiplicity of import control regimes
established by different nations.

----------

   (7)  For example, products with encryption capabilities can
be exported freely to Canada without the need of a USML export
license if intended for domestic Canadian use.

____________________________________________________________


                  7.1.6 Liberal Export for
           Strong Cryptography with Weak Defaults

   An export control regime could grant liberal export
consideration to products with encryption capabilities
designed in such a way that the defaults for usage result in
weak or non-existent encryption (Box 7.1), but also so that
users could invoke options for stronger encryption through an
affirmative action.

   For example, such a product might be a telephone designed
for end-to-end security. The default mode of operation could
be set in two different ways. One way would be for the
telephone to establish a secure connection if the called party
has a comparable unit. The second way would be for the
telephone always to establish an insecure connection;
establishing a secure connection would require an explicit
action by the user. All experience suggests that the second
way would result in far fewer secure calls than the first
way.(8)

   An export policy favoring encryption products with weak
defaults serves the information-gathering needs of law
enforcement and signals intelligence efforts because of user
psychology. Many people, criminals and foreign government
workers included, often make mistakes by using products "out
of the box" without any particular attempt to configure them
properly. Such a policy could also take advantage of the
distribution mechanisms of the U.S. software industry to
spread weaker defaults.

   Experience to date suggests that good implementations of
cryptography for confidentiality are transparent and automatic
and thus do not require positive user action. Such
implementations are likely to be chosen by organizations that
are most concerned about confidentiality and that have a staff
dedicated to ensuring confidentiality (e.g., by resetting weak
vendor-supplied defaults). End users that obtain their
products with encryption capabilities on the retail store
market are the most likely to be affected by this proposal,
but such users constitute a relatively small part of the
overall market.

----------

   (8)  Of course, other techniques can be used to further
discourage the use of secure modes. For example, the telephone
could be designed to force the user to wait several seconds
for establishment of the secure mode.

____________________________________________________________


                  7.1.7 Liberal Export for
      Cryptographic Applications Programming Interfaces

   A cryptographic applications programming interface (CAPI;
see Appendix K) is a well-defined boundary between a baseline
product (such as an operating system, a database management
program, or a word-processing program) and a cryptography
module that provides a secure set of cryptographic services
such as authentication, digital signature generation, random
number generation, and stream or block mode encryption. The
use of a CAPI allows vendors to support cryptographic
functions in their products without actually providing them at
distribution.

   Even though such products have no cryptographic
functionality per se and are therefore not specifically
included in Category XIII of the ITAR (see Appendix L),
license applications for the export of products incorporating
CAPIs have in general been denied. The reason is that strong
cryptographic capabilities could be deployed on a vast scale
if U.S. vendors exported applications supporting a common CAPI
and a foreign vendor then marketed an add-in module with
strong encryption capabilities.(9)

   To meet the goals of less restrictive export controls,
liberal export consideration could be given to products that
incorporate a CAPI designed so that only "certified"
cryptographic modules could be incorporated into and used by
the application. That is, the application with the CAPI would
have to ensure that the CAPI would work only with certified
cryptographic modules. This could be accomplished by
incorporating into the application a check for a digital
signature whose presence would indicate that the add-on
cryptographic module was indeed certified; if and only if such
a signature were detected by the CAPI would the product allow
use of the module.
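
   To make the signature-gating idea concrete, the sketch
below (in Python, using the third-party "cryptography"
package) shows a toy CAPI that will load a cryptographic
module only if the module's code carries a valid digital
signature from a certifying authority. The module format, the
function names, and the use of an Ed25519 key in place of
whatever signature algorithm a real product would use are
illustrative assumptions, not features of any actual product.

    # Toy CAPI that accepts only "certified" crypto modules, i.e.,
    # modules whose code is signed by a known certifying authority.
    # Assumes the third-party package:  pip install cryptography
    from cryptography.hazmat.primitives.asymmetric import ed25519
    from cryptography.exceptions import InvalidSignature

    # Key pair held by the hypothetical certifying authority.
    authority_private = ed25519.Ed25519PrivateKey.generate()
    AUTHORITY_PUBLIC = authority_private.public_key()  # shipped inside the CAPI

    def certify_module(module_code: bytes) -> bytes:
        """Authority signs the module's code once, before distribution."""
        return authority_private.sign(module_code)

    def load_module(module_code: bytes, signature: bytes):
        """CAPI-side check: refuse any module lacking a valid signature."""
        try:
            AUTHORITY_PUBLIC.verify(signature, module_code)
        except InvalidSignature:
            raise RuntimeError("uncertified cryptographic module rejected")
        return compile(module_code, "<crypto_module>", "exec")

    # A certified module is accepted; a tampered copy is not.
    module = b"def encrypt(data, key): return bytes(b ^ key for b in data)"
    signature = certify_module(module)
    load_module(module, signature)                      # accepted
    try:
        load_module(module + b" # patched", signature)  # rejected
    except RuntimeError as err:
        print(err)

The essential point is that the gate is enforced inside the
exported application itself; how resistant that gate is to
being patched out is a separate question, taken up below.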

   One instantiation of a CAPI is the CAPI built into
applications that use the Fortezza card (discussed in Chapter
5). CAPI software for Fortezza is available for a variety of
operating systems and PC-card reader types; such software
incorporates a check to ensure that the device being used is
itself a Fortezza card. The Fortezza card contains a private
Digital Signature Standard (DSS) key that can be used to sign
a challenge from the workstation. The corresponding DSS public
key is made available in the CAPI, and thus the CAPI is able
to verify the authenticity of the Fortezza card.
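
   A rough sketch of that challenge-response exchange follows,
again in Python with the "cryptography" package. The 2,048-bit
DSA key, the 20-byte challenge, and the message layout are
assumptions made for brevity; they are not a description of
the actual Fortezza implementation.

    # The card signs a random challenge with its private DSS (DSA) key;
    # the CAPI verifies the response with a public key it already trusts.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import dsa
    from cryptography.exceptions import InvalidSignature

    # Key pair established when the card is issued.  The private key
    # lives only on the card; the public key is built into the CAPI.
    _card_private = dsa.generate_private_key(key_size=2048)
    TRUSTED_CARD_PUBLIC = _card_private.public_key()

    class FortezzaLikeCard:
        """Stand-in for the token; signs challenges with its private key."""
        def sign_challenge(self, challenge: bytes) -> bytes:
            return _card_private.sign(challenge, hashes.SHA256())

    def capi_authenticate(card) -> bool:
        """CAPI-side check run before any cryptographic service is offered."""
        challenge = os.urandom(20)               # fresh, unpredictable nonce
        response = card.sign_challenge(challenge)
        try:
            TRUSTED_CARD_PUBLIC.verify(response, challenge, hashes.SHA256())
            return True
        except InvalidSignature:
            return False

    print(capi_authenticate(FortezzaLikeCard()))  # True only for a genuine card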

   A second approach to the use of a CAPI has been proposed by
Microsoft and is now eligible for liberal export consideration
by the State Department (Box 7.2). The Microsoft approach
involves three components: an operating system with a CAPI
embedded within it, modules providing cryptographic services
through the CAPI, and applications that can call on the
modules through the CAPI provided by the operating system. In
principle, each of these components is the responsibility of
different parties: Microsoft is responsible for the operating
system, cryptography vendors are responsible for the modules,
and independent applications vendors are responsible for the
applications that run on the operating system.

   From the standpoint of national security authorities, the
effectiveness of an approach based on the use of a certified
CAPI/module combination depends on a number of factors. For
example, the product incorporating the CAPI should be known to
be implemented in a manner that enforces the appropriate
constraints on crypto-modules that it calls; furthermore, the
code that provides such enforcement should not be trivially
bypassed. The party certifying the crypto-module should
protect the private signature key used to sign it. Vendors
would still be required to support domestic and exportable
versions of an application if the domestic version was allowed
to use any module while the export version was restricted in
the set of modules that would be accepted, although the amount
of effort required to develop these two different versions
would be quite small.

   The use of CAPIs that check for appropriate digital
signatures would shift the burden for export control from the
applications or systems vendors to the vendors of the
cryptographic modules. This shift could benefit both the
government and vendors, because of the potential to reduce the
number of players engaged in the process. For example, all of
the hundreds of e-mail applications on the market could
quickly support encrypted e-mail by supporting a CAPI
developed by a handful of software and/or hardware
cryptography vendors. The cryptography vendors would be
responsible for dealing with the export and import controls of
various countries, leaving e-mail application vendors to
export freely anywhere in the world. Capabilities such as
escrowed encryption could be supported within the cryptography
module itself, freeing the applications or system vendor from
most technical, operational, and political issues related to
export control.

   A trustworthy CAPI would also help to support cryptography
policies that might differ among nations. In particular, a
given nation might specify certain performance requirements
for all cryptography modules used or purchased within its
borders.(10) International interoperability problems resulting
from conflicting national cryptography policies would still
remain.

----------

   (9)  This discussion refers only to "documented" or "open"
CAPIs, i.e., CAPIs that are accessible to the end user.
Another kind of CAPI is "undocumented" and "closed"; that is,
it is inaccessible to the end user, though it is used by
system developers for their own convenience. While a history
of export licensing decisions and practices supports the
conclusion that most products implementing "open" CAPIs will
not receive export licenses, history provides no consistent
guidance with respect to products implementing CAPIs that are
inaccessible to the end user.

   (10) An approach to this effect is the thrust of a proposal
from Hewlett-Packard. The Hewlett-Packard International
Cryptography Framework (ICF) proposal includes a stamp-size
"policy card" (smart card) that would be inserted into a
cryptographic unit that is a part of a host system.
Cryptographic functions provided within the cryptographic unit
could be executed only in the presence of a valid policy
card. The policy card could be configured to enable only those
cryptographic functions that are consistent with government
export and local policies. The "policy card" allows for
managing the use of the integrated cryptography down to the
application-specific level. By obtaining a new policy card,
customers could be upgraded to take advantage of varying
cryptographic capabilities as government policies or
organizational needs change. As part of an ICF solution, a
network security server could be implemented to provide a
range of different security services including verification of
the other three service elements (the card, the host system,
the cryptographic unit). Sources: Carl Snyder,
Hewlett-Packard, testimony to the NRC committee in February
1995; Hewlett-Packard, *International Cryptography Framework
White Paper*, February 1994.

____________________________________________________________


                  7.1.8 Liberal Export for
      Escrowable Products with Encryption Capabilities

   As discussed in Chapter 5, the Administration's proposal of
August 17, 1995, would allow liberal export consideration for
software products with encryption capabilities whose keys are
"properly escrowed." In other words, strong cryptography would
be enabled for these products only when the keys were escrowed
with appropriate escrow agents.

   An escrowed encryption product differs from what might be
called an "escrowable" product. Specifically, an escrowed
encryption product is one whose key must be escrowed with a
registered, approved agent before the use of (strong)
cryptography can be enabled, whereas an escrowable product is
one that provides full cryptographic functionality that
includes optional escrow features for the user. The user of an
escrowable product can choose whether or not to escrow the
relevant keys, but regardless of the choice, the product still
provides its full suite of encryption capabilities.(11)
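
   The escrowable side of this distinction can be sketched in
a few lines of code. The sketch below assumes the Fernet
cipher from the Python "cryptography" package as a stand-in
for a product's actual algorithms: the escrow step is an
option the user may decline, and the product encrypts at full
strength either way.

    # "Escrowable" product sketch: full-strength encryption is available
    # whether or not the user registers a safe copy of the key.
    from cryptography.fernet import Fernet

    def install(escrow: bool, safe_copy_path: str = "safe_copy.key") -> Fernet:
        key = Fernet.generate_key()          # full-strength key either way
        if escrow:                           # optional escrow feature
            with open(safe_copy_path, "wb") as f:
                f.write(key)                 # copy the user may register somewhere
        return Fernet(key)

    cipher = install(escrow=False)           # user declines to escrow
    token = cipher.encrypt(b"full cryptographic functionality either way")
    print(cipher.decrypt(token))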

   Liberal export consideration for escrowable products could
be granted and incentives promulgated to encourage the use of
escrow features. While the short-term disadvantage of this
approach from the standpoint of U.S. national security is that
it allows encryption stronger than the current 40-bit RC2/RC4
encryption allowed under present regulations to diffuse into
foreign hands, it has the long-term advantage of providing
foreign governments with a tool for influencing or regulating
the use of cryptography as they see fit. Currently, most
products with encryption capabilities do not have features to
support escrow built into them. However, if
products were designed and exported with such features,
governments would have a hook for exercising some influence.
Some governments might choose to require the escrowing of
keys, while others might simply provide incentives to
encourage escrowing. In any event, the diffusion of escrowable
products abroad would raise the awareness of foreign
governments, businesses, and individuals about encryption and
thus lay a foundation for international cooperation on the
formulation of national cryptography policies.

----------

   (11) For example, an escrowable product would not enable
the user to encrypt files with passwords. Rather, the
installation of the product would require the user to create
a key or set of named keys, and these keys would be used when
encrypting files. The installation would also generate a
protected "safe copy" of the keys with instructions to the
user that they should register the key "somewhere." It would
be up to the user to decide where or whether to register the
key.

____________________________________________________________


       7.1.9 Alternatives to Government Certification
                   of Escrow Agents Abroad

   As discussed in Chapter 5, the Administration's August 1995
proposal focuses on an implementation of escrowed encryption
that involves the use of "escrow agents certified by the U.S.
government or by foreign governments with which the U.S.
government has formal agreements consistent with U.S. law
enforcement and national security requirements."(12) This
approach requires foreign customers of U.S. escrowed
encryption products to use U.S. escrow agents until formal
agreements can be negotiated that specify the responsibilities
of foreign escrow agents to the United States for law
enforcement and national security purposes.

   Skeptics ask what incentives the U.S. government would have
to conclude the formal agreements described in the August 1995
proposal if U.S. escrow agents would, by default, be the
escrow agents for foreign consumers. They believe that the
most likely result of adopting the Administration's proposal
would be U.S. foot-dragging and inordinate delays in the
consummation of formal agreements for certifying foreign
escrow agents. Appendix G describes some of the U.S.
government efforts to date to promote a dialogue on such
agreements.

   The approaches described below address problems raised by
certifying foreign escrow agents:

   +    *Informal arrangements for cooperation*. One
alternative is based on the fact that the United States enjoys
strong cooperative law enforcement relationships with many
nations with which it does not have formal agreements
regarding cooperation. Negotiation of a formal agreement
between the United States and another nation could be replaced
by presidential certification that strong cooperative law
enforcement relationships exist between the United States and
that nation. Subsequent cooperation would be undertaken on the
same basis that cooperation is offered today.

   +    *Contractual key escrow*. A second alternative is
based on the idea that formal agreements between nations
governing exchange of escrowed key information might be
replaced by private contractual arrangements.(13) A user that
escrows key information with an escrow agent, wherever that
agent is located, would agree contractually that the U.S.
government would have access to that information under a
certain set of carefully specified circumstances. A suitably
designed exportable product would provide strong encryption
only upon receipt of affirmative confirmation that the
relevant key information had been deposited with escrow agents
requiring such contracts with users. Alternatively, as a
condition of sale, end users could be required to deposit keys
with escrow agents subject to such a contractual requirement.

----------

   (12)  See Box 5.3, Chapter 5.

   (13)  Henry Perritt, "Transnational Key Escrow," paper
presented at the International Cryptography Institute,
Washington, D.C., September 22, 1995.

____________________________________________________________


           7.1.10 Use of Differential Work Factors
                       in Cryptography

   Differential work factor cryptography is an approach to
cryptography that presents different work factors to different
parties attempting to cryptanalyze a given piece of encrypted
information.(14) Iris Associates, the creator of Notes,
proposed such an approach for Lotus Notes Version 4 to
facilitate its export, and the U.S. government has accepted
it. Specifically, the international edition of Lotus Notes
Version 4 is designed to present a 40-bit work factor to the
U.S. government and a 64-bit work factor to all other parties.
It implements this differential work factor by encrypting 24
bits of the 64-bit key with the public-key portion of an RSA
key pair held by the U.S. government. Because the U.S.
government can easily decrypt these 24 bits, it faces only a
40-bit work factor when it needs access to a communications
stream overseas encrypted by the international edition. All
other parties attempting to cryptanalyze a message face a
64-bit work factor.
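
   The general mechanism can be sketched in Python using
RSA-OAEP from the "cryptography" package. The 2,048-bit key,
the padding choice, and the packaging of the 24-bit field are
assumptions made for illustration; they do not describe the
actual Lotus Notes implementation.

    # Differential work factor sketch: a 64-bit session key protects the
    # traffic, but 24 of its bits are also encrypted under a public key
    # held by one designated party.  That party recovers the 24 bits and
    # faces a 40-bit search; everyone else faces the full 64-bit search.
    import os
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import rsa, padding

    designated_private = rsa.generate_private_key(public_exponent=65537,
                                                  key_size=2048)
    DESIGNATED_PUBLIC = designated_private.public_key()

    OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)

    def workfactor_reduction_field(session_key: bytes) -> bytes:
        """Wrap 3 of the 8 key bytes (24 of 64 bits) for the designated party."""
        assert len(session_key) == 8
        return DESIGNATED_PUBLIC.encrypt(session_key[:3], OAEP)

    session_key = os.urandom(8)                        # 64-bit session key
    field = workfactor_reduction_field(session_key)    # sent with the message

    known = designated_private.decrypt(field, OAEP)    # 24 bits recovered
    print(2 ** (8 * (8 - len(known))), "trials for the designated party")
    print(2 ** 64, "trials for everyone else")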

   Differential work factor cryptography is similar to partial
key escrow (described in Chapter 5) in that both provide very
strong protection against most attackers but are vulnerable to
attack by some specifically chosen authority. However, they
are different in that differential work factor cryptography
does not require user interaction with an escrow agent, and so
it can offer strong cryptography "out of the box." Partial key
escrow offers all of the strengths and weaknesses of escrowed
encryption, including the requirement that enabling strong
cryptography entails interaction with an escrow agent.

----------

   (14) Recall from Chapter 2 that a work factor is a measure
of the amount of work that it takes to undertake a brute-force
exhaustive cryptanalytic search.

____________________________________________________________


           7.1.11 Separation of Cryptography from
            Other Items on the U.S. Munitions List

   As noted in Chapter 4, the inclusion of products with
encryption capabilities on the USML puts them on a par with
products intended for strictly military purposes (e.g., tanks,
missiles). An export control regime that authorized the U.S.
government to separate cryptography -- a true dual-use
technology -- from strictly military items would provide much
needed flexibility in dealing with nations on which the United
States wishes to place sanctions.


          7.2 ALTERNATIVES FOR PROVIDING GOVERNMENT
            EXCEPTIONAL ACCESS TO ENCRYPTED DATA


   Providing government exceptional access to encrypted data
is an issue with a number of dimensions, only some of which
relate directly to encryption.


           7.2.1 A Prohibition of the Use and Sale
   of Cryptography Lacking Features for Exceptional Access

   One obvious approach to ensuring government exceptional
access to encrypted information is to pass legislation that
forbids the use of cryptography lacking features for such
access, presumably with criminal penalties attached for
violation. (Given that escrowed cryptography appears to be the
most plausible approach to providing government exceptional
access, the term "unescrowed cryptography" is used here as a
synonym for cryptography without features for exceptional
access.) Indeed, opponents of the Escrowed Encryption Standard
(EES) and the Clipper chip have argued repeatedly that the EES
approach would succeed only if alternatives were banned.(15)
Many concerns have been raised about the prospect of a
mandatory prohibition on the use of unescrowed cryptography.

   From a law enforcement standpoint, a legislative
prohibition on the use of unescrowed encryption would have
clear advantages. Its primary impact would be to eliminate the
commercial supply of unescrowed products with encryption
capabilities -- vendors without a market would most likely not
produce or distribute such products, thus limiting access of
criminals to unescrowed encryption and increasing the
inconvenience of evading a prohibition on use of unescrowed
encryption. At the same time, such a prohibition would leave
law-abiding users with strong concerns about the
confidentiality of their information being subject to
procedures beyond their control.

   A legislative prohibition of the use of unescrowed
encryption also raises specific technical, economic, and legal
issues.


Concerns About Personal Freedom

   The Clinton Administration has stated that it has no
intention of outlawing unescrowed cryptography, and it has
repeatedly and explicitly disavowed any intent to regulate the
domestic use of cryptography. However, no administration can
bind future administrations (a fact freely acknowledged by
administration officials). Thus, some critics of the
Administration position believe that the dynamics of the
encryption problem may well drive the government -- sooner or
later -- to prohibit the use of encryption without government
access.(16) The result is that the Administration is simply
not believed when it forswears any intent to regulate
cryptography used in the United States. Two related concerns
are raised:

   +    *The "slippery slope"*. Many skeptics fear that
current cryptography policy is the first step down a slippery
slope toward a more restrictive policy regime under which
government may not continue to respect limits in place at the
outset. An oft-cited example is current use of the Social
Security Number, which was not originally intended to serve as
a universal identifier when the Social Security Act was passed
in 1935 but has, over the last 50 years, come to serve exactly
that role by default, simply because it was there to be
exploited for purposes not originally intended by the enabling
legislation.

   +    *Misuse of deployed infrastructure for cryptography*.
Many skeptics are concerned that a widely deployed
infrastructure for cryptography could be used by a future
administration or Congress to promulgate and/or enforce
restrictive policies regarding the use of cryptography.
Critics argue that with such an infrastructure in place, a
simple policy change could transform a comparatively benign
deployment of technology into an oppressive one. For
example, critics of the Clipper proposal were concerned about
the possibility that a secure telephone system with government
exceptional access capabilities could, under a strictly
voluntary program to encourage its purchase and use, achieve
moderate market penetration. Such market penetration could
then facilitate legislation outlawing all other
cryptographically secure telephones.(17)

   Adding to these concerns are suggestions such as those made
by a responsible and senior government official that even
research in cryptography conducted in the civilian sector
should be controlled in a legal regime similar to that which
governs research with relevance to nuclear weapons design (Box
7.3). Ironically, former NSA Director Bobby Inman's comments
on scientific research appeared in an article that called for
greater cooperation between academic scientists and national
security authorities and used as a model of cooperation an
arrangement, recommended by the Public Cryptography Study
Group, that has worked generally well in balancing the needs
of academic science and those of national security.(18)
Nevertheless, Inman's words are often cited as reflecting a
national security mind-set that could lead to a serious loss
of intellectual freedom and discourse. More recently, FBI
Director Louis Freeh stated to the committee that "other
approaches may be necessary" if technology vendors do not
adopt escrowed encryption on their own. Moreover, the current
Administration has explicitly rejected the premise that "every
American, as a matter of right, is entitled to an unbreakable
encryption product."(19)

   Given concerns about possible compromises of personal and
civil liberties, many skeptics of government in this area
believe that the safest approach is for government to stay out
of cryptography policy entirely. They argue that any steps in
this area, no matter how well intentioned or plausible or
reasonable, must be resisted strongly, because such steps will
inevitably be the first poking of the camel's nose under the
tent.


Technical Issues

   Even if a legislative prohibition on the use of unescrowed
encryption were enacted, it would be technically easy for
parties with special needs for security to circumvent such a
ban. In some cases, circumvention would be explicitly illegal,
while in others it might well be entirely legal. For example:

   +    Software for unescrowed encryption can be downloaded
from the Internet; such software is available even today. Even
if posting such software in the United States were to be
illegal under a prohibition, it would nonetheless be
impossible to prevent U.S. Internet users from downloading
software that had been posted on sites abroad.

   +    Superencryption can be used. Superencryption
(sometimes also known as double encryption) is encryption of
traffic before it is given to an escrowed encryption device or
system. For technical reasons, superencryption is impossible
to detect without monitoring and attempting to decrypt all
escrow-encrypted traffic, and such large-scale monitoring
would be seriously at odds with the selected and limited
nature of wiretaps today. (A brief sketch at the end of this
subsection illustrates such layering.)

   An additional difficulty with superencryption is that it is
not technically possible to obtain escrow information for all
layers simultaneously, because the fact of double and triple
encryption cannot be known in advance. Even if the second (or
third or fourth) layers of encryption were escrowed, law
enforcement authorities would have to approach separately and
sequentially the escrow agents holding key information for
those layers.

   +    Talent for hire is easy to obtain. A criminal party
could easily hire a knowledgeable person to develop needed
software. For example, an out-of-work or underemployed
scientist or mathematician from the former Soviet Union would
find a retainer fee of $500 per month to be a king's
ransom.(20)

   +    Information can be stored remotely. An obvious
noncryptographic circumvention is to store data on a remote
computer whose Internet address is known only to the user.
Such a computer could be physically located anywhere in the
world (and might even automatically encrypt files that were
stored there). But even if it were not encrypted, data stored
on a remote computer would be impossible for law enforcement
officials to access without the cooperation of the data's
owner. Such remote storage could occur quite legally even with
a ban on the use of unescrowed encryption.

   +    Demonstrating that a given communication or data file
is "encrypted" is fraught with ambiguities arising from the
many different possibilities for sending information:

        -- An individual might use an obscure data format. For
        example, while ASCII is the most common representation
        of alphanumeric characters today, Unicode (a proposed
        16-bit representation) and EBCDIC (a more-or-less
        obsolete 8-bit representation) are equally good for
        sending plain English text.

        -- An individual talking to another individual might
        speak in a language such as Navajo.

        -- An individual talking to another individual might
        speak in code phrases.

        -- An individual might send compressed digital data
        that could easily be confused with encrypted data
        despite having no purpose related to encryption. If,
        for example, an individual develops his own good
        compression algorithm and does not share it with
        anyone, that compressed bit stream may prove as
        difficult to decipher as an encrypted bit stream.(21)

        -- An individual might deposit fragments of a text or
        image that he wished to conceal or protect in a number
        of different Internet-accessible computers. The
        plaintext would be reassembled into a coherent whole
        only when the fragments were downloaded into the
        user's computer.(22)

        -- An individual might use steganography.(23)

   None of these alternative coding schemes provides
confidentiality as strong as would be provided by good
cryptography, but their extensive use could well complicate
attempts by government to obtain plaintext information.

   Given so many different ways to subvert a ban on the use of
unescrowed cryptography, a dedicated subculture would be likely
to emerge in which nonconformists used coding schemes or
unescrowed cryptography impenetrable to all outsiders.
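
   A brief sketch of the superencryption circumvention noted
earlier in this subsection follows. The Fernet cipher from the
Python "cryptography" package stands in here both for a user's
private cipher and for a hypothetical escrowed system; the
point is only the layering, not any particular algorithm.

    # Superencryption sketch: the message is encrypted first with a key
    # known only to the communicating parties, and the result is then
    # encrypted again by an escrowed system.  A party holding only the
    # escrowed key recovers the inner ciphertext, not the plaintext.
    from cryptography.fernet import Fernet

    inner_key = Fernet.generate_key()        # never escrowed
    escrowed_key = Fernet.generate_key()     # available to escrow agents

    plaintext = b"meet at the usual place"
    inner_ciphertext = Fernet(inner_key).encrypt(plaintext)
    outer_ciphertext = Fernet(escrowed_key).encrypt(inner_ciphertext)

    # Exceptional access with the escrowed key strips only the outer layer.
    recovered = Fernet(escrowed_key).decrypt(outer_ciphertext)
    print(recovered == inner_ciphertext)                        # True
    print(Fernet(inner_key).decrypt(recovered) == plaintext)    # True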


Economic Concerns

   An important economic issue that would arise with a
legislative prohibition on the use of unescrowed cryptography
would involve the political difficulty of mandating
abandonment of existing user investments in products with
encryption capabilities. These investments, considerable even
today, are growing rapidly, and the expense to users of
immediately having to replace unescrowed encryption products
with escrowed ones could be enormous;(24) a further expense
would be the labor cost involved in decrypting existing
encrypted archives and reencrypting them using escrowed
encryption products. One potential mitigating factor for cost
is the short product cycle of information technology products.
Whether users would abandon nonconforming products in favor of
new products with escrowing features -- knowing that they were
specifically designed to facilitate exceptional access -- is
open to question.


Legal and Constitutional Issues

   Even apart from the issues described above, which in the
committee's view are quite significant, a legislative ban on
the domestic use of unescrowed encryption would raise
constitutional issues. Insofar as a prohibition on unescrowed
encryption were treated for constitutional purposes as a
limitation on the content of communications, the government
would have to come forward with a compelling state interest to
justify the ban. To some, a prohibition on the use of
unescrowed encryption would be the equivalent of a law
proscribing use of a language (e.g., Spanish), which would
almost certainly be unconstitutional. On the other hand, if
such a ban were regarded as tantamount to eliminating a method
of communication (i.e., were regarded as content-neutral),
then the courts would employ a simple balancing test to
determine its constitutionality. The government would have to
show that the public interests were jeopardized by a world of
unrestrained availability of encryption, and these interests
would have to be weighed against the free speech interests
sacrificed by the ban. It would also be significant to know
what alternative methods of anonymous communication would
remain available under a ban and how freedom of speech
would be affected by the specific system of escrow chosen by
the government. These various considerations are difficult,
and in some cases impossible, to estimate in advance of
particular legislation and a particular case, but the First
Amendment issues likely to arise with a total prohibition on
the use of unescrowed encryption are not trivial.(25)

   A step likely to raise fewer constitutional problems, but
not eliminate them, is one that would impose restrictions on
the commercial sale of unescrowed products with encryption
capabilities.(26) Under such a regime, products with
encryption capabilities eligible for sale would have to
conform to certain restrictions intended to ensure public
safety, in much the same way that other products such as
drugs, automobiles, and meat must satisfy particular
government regulations. "Freeware" or home-grown products with
encryption capabilities would be exempt from such regulations
as long as they were used privately. The problem of
already-deployed products would remain, but in a different
form: new products either would or would not interoperate
with the products already deployed. If
noninteroperability were required, users attempting to
maintain and use two noninteroperating systems would be faced
with enormous expenses. If interoperability were allowed, the
intent of the ban would be thwarted.

   Finally, any national policy whose stated purpose is to
prevent the use of unescrowed encryption preempts decision
making that the committee believes properly belongs to users.
As noted in Chapter 5, escrowed encryption reduces the level
of assured confidentiality in exchange for allowing controlled
exceptional access to parties that may need to retrieve
encrypted data. Only in a policy regime of voluntary
compliance can users decide how to make that trade-off. A
legislative prohibition of the use or sale of unescrowed
encryption would be a clear statement that law enforcement
needs for exceptional access to information outweigh
user interests in having maximum possible protection for their
information, a position that has yet to be defended or even
publicly argued by any player in the debate.

----------

   (15) For example, see Electronic Privacy Information
Center, press release, August 16, 1995, available at
http://www.epic.org.

   (16) For example, Senator Charles Grassley (R-IA)
introduced legislation (The Anti-Electronic Racketeering Act
of 1995) on June 27, 1995, to "prohibit certain acts involving
the use of computers in the furtherance of crimes." The
proposed legislation makes it unlawful "to distribute computer
software that encodes or encrypts electronic or digital
communications to computer networks that the person
distributing the software knows or reasonably should know, is
accessible to foreign nationals and foreign governments,
regardless of whether such software has been designated as
nonexportable," except for software that uses "a universal
decoding device or program that was provided to the Department
of Justice prior to the distribution."

   (17) By contrast, a deployed infrastructure could have
characteristics that would make it quite difficult to
implement policy changes on a short time scale. For example,
it would be very difficult to implement a policy change that
would change the nature of the way in which people use today's
telephone system. Not surprisingly, policy makers would prefer
to work with infrastructures that are quickly responsive to
their policy preferences.

   (18) The arrangement recommended by the Public Cryptography
Study Group called for voluntary prepublication review of all
cryptography research undertaken in the private sector. For
more discussion of this arrangement, see Public Cryptography
Study Group, *Report of the Public Cryptography Study Group*,
American Council on Education, Washington, D.C., February
1981. A history leading to the formation of the Public
Cryptography Study Group can be found in National Research
Council, "Voluntary Restraints on Research With National
Security Implications: The Case of Cryptography, 1972-1982,"
in *Scientific Communication and National Security*, National
Academy Press, Washington, D.C., 1982, Appendix E, pp.
120-125. The ACM study on cryptography policy concluded that
this prepublication arrangement has not resulted in any
chilling effects in the long term (see Susan Landau et al.,
*Codes, Keys and Conflicts: Issues in U.S. Crypto Policy*,
ACM, New York, 1994, p. 39).

   (19) "Questions and Answers About the Clinton
Administration's Telecommunications Initiative," undated
document. Released on April 16, 1993, with the "Statement by
the Press Secretary on the Clipper Chip." See *The Third CPSR
Cryptography and Privacy Conference Source Book*, June 7,
1993, Part III.

   (20) Alan Cooperman and Kyrill Belianinov, "Moonlighting by
Modem in Russia," *U.S. News & World Report*, April 17, 1995,
pp. 45-48. In addition, many high-technology jobs are moving
overseas in general, not just to the former Soviet Union. See
for example, Keith Bradsher, "Skilled Workers Watch Their Jobs
Migrate Overseas," *New York Times*, August 28, 1995, p. 1.

   (21) A discussion of using text compression for
confidentiality purposes can be found in Ian Witten and John
Cleary, "On the Privacy Afforded by Adaptive Text
Compression," *Computers and Security*, July 1988, Volume
7(4), pp. 397-408. One problem in using compression schemes as
a technique for ensuring confidentiality is that almost any
practical compression scheme has the characteristic that
closely similar plaintexts would generate similar ciphertexts,
thereby providing a cryptanalyst with a valuable advantage not
available if a strong encryption algorithm is used.

   (22) Jaron Lanier, "Unmuzzling the Internet: How to Evade
the Censors and Make a Statement, Too," OpEd, *New York
Times*, January 2, 1996, p. A-15.

   (23) Steganography is the name given to techniques for
hiding a message within another message. For example, the
first letter of each word in a sentence or a paragraph can be
used to spell out a message, or a photograph can be
constructed so as to conceal information. Specifically, most
black-and-white pictures rendered in digital form use at most
2^16 (65,536) shades of gray, because the human eye is
incapable of distinguishing any more shades. Each element of
a digitized black-and-white photo would then be associated
with 16 bits of information about what shade of gray should be
used. If a picture were digitized with 24 bits of gray scale,
the last 8 bits could be used to convey a concealed message
that would never appear except for someone who knew to look
for it. The digital size of the picture would be 50% larger
than it would ordinarily be, but no one but the creator of the
image would know.
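
   As a rough illustration of the least-significant-bit
technique sketched in note 23, the following Python fragment
hides one byte of a message in the low-order bits of each
group of eight pixel values and then recovers it; the pixel
depth and layout are simplified assumptions.

    # Least-significant-bit steganography sketch: each pixel value donates
    # its lowest bit, so eight pixels carry one byte of hidden message.
    def embed(pixels, message):
        bits = [(byte >> i) & 1 for byte in message for i in range(8)]
        assert len(bits) <= len(pixels), "image too small for the message"
        return [(p & ~1) | b for p, b in zip(pixels, bits)] + pixels[len(bits):]

    def extract(pixels, length):
        bits = [p & 1 for p in pixels[:8 * length]]
        return bytes(sum(bits[8 * i + j] << j for j in range(8))
                     for i in range(length))

    cover = [200, 201, 197, 64, 65, 66, 130, 131] * 16   # 128 gray values
    stego = embed(cover, b"hidden")
    print(extract(stego, 6))                             # b'hidden'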

   (24) Existing unescrowed encryption products could be kept
in place if end users could be made to comply with a
prohibition of the use of such products. In some cases, a
small technical fix might suffice to disable the cryptography
features of a system; such fixes would be most relevant in a
computing environment in which the software used by end users
is centrally administered (as in the case of many
corporations) and provides system administrators with the
capability for turning off encryption. In other cases, users
-- typically individual users who had purchased their products
from retail store outlets -- would have to be trusted to
refrain from using encryption.

   (25) For a view arguing that relevant Fourth and Fifth
Amendment issues would be resolved against the constitutionality
of such a prohibition, see Michael Froomkin, "The Metaphor Is
the Key: Cryptography, The Clipper Chip and the Constitution,"
*University of Pennsylvania Law Review*, Volume 143(3),
January 1995, pp. 709-897. The committee takes no position on
these Fourth and Fifth Amendment issues.

   (26) Such a scheme has been suggested by Dorothy Denning in
"The Future of Cryptography," *Internet Security Monthly*,
October 1995, p. 10. (Also available from
http://www.cosc.georgetown.edu/~denning/crypto.) Denning's
paper does not suggest that "freeware" be exempt, although her
proposal would provide an exemption for personally developed
software used to encrypt personal files.

____________________________________________________________


      7.2.2 Criminalization of the Use of Cryptography
                in the Commission of a Crime

   Proposals to criminalize the use of cryptography in the
commission of a crime have the advantage that they focus the
weight of the criminal justice system on the "bad guy" without
placing restrictions on the use of cryptography by "good
guys." Further, deliberate use of cryptography in the
commission of a crime could result in considerable damage,
either to society as a whole or to particular individuals, in
circumstances suggesting premeditated wrongdoing, an act that
society tends to view as worthy of greater punishment than a
crime committed in the heat of the moment.

   Two approaches could be taken to criminalize the use of
cryptography in the commission of a crime:

   +    Construct a specific list of crimes in which the use
of cryptography would subject the criminal to additional
penalties. For example, using a deadly weapon in committing a
robbery or causing the death of someone during the commission
of a crime are themselves crimes that lead to additional
penalties.

   +    Develop a blanket provision stating that the use of
cryptography for illegal purposes (or for purposes contrary to
law) is itself a felony.

   In either event, additional penalties for the use of
cryptography could be triggered by a conviction for a primary
crime, or they could be imposed independently of such a
conviction. Precedents include the laws criminalizing mail
fraud (fraud is a crime, generally a state crime, but mail
fraud -- use of the mails to commit fraud -- is an additional
federal crime) and the use of a gun during the commission of
a felony.

   Intentional use of cryptography in the concealment of a
crime could also be criminalized. Since the use of
cryptography is a prima facie act of concealment, such an
expansion would reduce the burden of proof on law enforcement
officials, who would have to prove only that cryptography was
used intentionally to conceal a crime. Providers of
cryptography would be criminally liable only if they had
knowingly provided cryptography for use in criminal activity.
On the other hand, a law of more expansive scope might well
impose additional burdens on businesses and raise civil
liberties concerns.

   In considering legal penalties for misuse of cryptography,
the question of what it means to "use" cryptography must be
addressed. For example, if and when encryption capabilities
are integrated seamlessly into applications and are invoked
automatically without effort on the part of a user, should the
use of these applications for criminal purposes lead to
additional penalties or to a charge for an additional offense?
Answering yes to this question provides another avenue for
prosecuting a criminal (recall that Al Capone was convicted of
income tax evasion rather than of his other criminal
activities). Answering no
leaves open the possibility of prosecutorial abuse. A second
question is what counts as "cryptography." As noted above in
the discussion of prohibiting unescrowed encryption, a number
of mathematical coding schemes can serve to obscure the
meaning of plaintext even if they are not encryption schemes
in the technical sense of the word. These and related
questions must be addressed in any serious consideration of
the option for criminalizing the use of cryptography in the
commission of a crime.


            7.2.3 Technical Non-Escrow Approaches
             for Obtaining Access to Information

   Escrowed encryption is not the only means by which law
enforcement can gain access to encrypted data. For example, as
advised by Department of Justice guidelines for searching and
seizing computers, law enforcement officials can approach the
software vendor or the Justice Department computer crime
laboratory for assistance in cryptanalyzing encrypted files.
These guidelines also advise that "clues to the password [may
be found] in the other evidence seized -- stray notes on
hardware or desks; scribble in the margins of manuals or on
the jackets of disks. Agents should consider whether the
suspect or someone else will provide the password if
requested."(27) Moreover, product designs intended to
facilitate exceptional access can include alternatives with
different strengths and weaknesses such as link encryption,
weak encryption, hidden back doors, and translucent
cryptography.


Link Encryption

   With link encryption, which applies only to communications
and stands in contrast to end-to-end encryption (Box 7.4), a
plaintext message enters a communications link, is encrypted
for transmission through the link, and is decrypted upon
exiting the link. In a communication that may involve many
links, sensitive information can be found in plaintext form at
the ends of each link (but not during transit). Thus, for
purposes of protecting sensitive information on an open
network accessible to anyone (the Internet is a good example),
link encryption is more vulnerable than end-to-end encryption,
which protects sensitive information from the moment it leaves
party A to the moment it arrives at party B. However, from the
standpoint of law enforcement, link encryption facilitates
legally authorized intercepts, because the traffic of interest
can always be obtained from one of the nodes in which the
traffic is unencrypted.
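
   The distinction can be made concrete with a short
illustrative sketch, written in Python with a toy XOR "cipher"
that is not a real cryptographic algorithm and with wholly
hypothetical names: under link encryption every intermediate
node recovers the plaintext, whereas under end-to-end
encryption the intermediate nodes see only ciphertext.

   def toy_encrypt(data, key):
       # XOR each byte with a one-byte key; NOT a real cipher, used
       # here only to mark where plaintext is exposed.
       return bytes(b ^ key for b in data)

   toy_decrypt = toy_encrypt            # XOR is its own inverse

   def link_encrypted_path(message, link_keys):
       # Carry a message across several links, re-encrypting on each
       # one; return what each intermediate node sees, namely the
       # full plaintext.
       seen_by_nodes = []
       for key in link_keys:
           ciphertext = toy_encrypt(message, key)  # protected in transit
           message = toy_decrypt(ciphertext, key)  # plaintext at the node
           seen_by_nodes.append(message)
       return seen_by_nodes

   def end_to_end_path(message, session_key, hops):
       # Encrypt once at party A; every intermediate node sees only
       # the same ciphertext, which only party B can decrypt.
       ciphertext = toy_encrypt(message, session_key)
       return [ciphertext] * hops

   msg = b"wire transfer instructions"
   print(link_encrypted_path(msg, [0x21, 0x5A, 0x7F]))  # plaintext at each node
   print(end_to_end_path(msg, 0x42, 3))                 # ciphertext at each node

   In the link-encrypted case, material of interest to an
authorized intercept is available at every node along the
path; in the end-to-end case, it is available only at the two
endpoints.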

   On a relatively closed network or one that is used to
transmit data securely and without direct user action, link
encryption may be cost-effective and desirable. A good example
is encryption of the wireless radio link between a GSM
cellular telephone and its ground station; the cellular
handset encrypts the voice signal and transmits it to the
ground station, at which point it is decrypted and fed into
the land-based network. Thus, the land-based network carries
only unencrypted voice traffic, even though that traffic was
encrypted during its transmission over the radio link. A
second example is the
"bulk" encryption of multiple channels -- each individually
unencrypted -- over a multiplexed fiber-optic link. In both of
these instances of link encryption, only those with access to
carrier facilities -- presumably law enforcement officials
acting under proper legal authorization -- would have the
opportunity to tap such traffic.


Weak Encryption

   Weak encryption allowing exceptional access would have to
be strong enough to resist brute-force attack by unauthorized
parties (e.g., business competitors) but weak enough to be
cracked by authorized parties (e.g., law enforcement
agencies). However, "weak" encryption is a moving target. The
difference between cracking strong and weak encryption by
brute-force attack is the level of computational resources
that can be brought to such an attack, and those resources are
ever increasing. In fact, the cost of brute-force attacks on
cryptography drops exponentially over time, in accordance with
Moore's law.(28)

   Widely available technologies now enable multiple
distributed workstations to work collectively on a
computational problem at the behest of only a few people (Box
4.6 in Chapter 4 discusses the brute-force cryptanalysis of
messages encrypted with the 40-bit RC4 algorithm), and it is
not clear that the computational resources of unauthorized
parties can be limited in any meaningful way. In today's
environment, unauthorized parties will almost always be able
to assemble the resources needed to mount successful
brute-force attacks against weak cryptography, to the
detriment of those using such cryptography. Thus, any
technical dividing line between authorized and unauthorized
decryption would change rather quickly.
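
   The moving-target character of "weak" encryption can be
illustrated with a rough back-of-envelope sketch in Python.
The assumed search rate of one billion keys per second and the
18-month doubling period are illustrative assumptions, not
figures drawn from this report.

   SECONDS_PER_YEAR = 365 * 24 * 3600
   DOUBLING_PERIOD_YEARS = 1.5      # assumed Moore's-law doubling period

   def years_to_search(key_bits, keys_per_second):
       # Expected time for an exhaustive key search, assuming half
       # the keyspace must be tried on average.
       return (2 ** (key_bits - 1)) / keys_per_second / SECONDS_PER_YEAR

   def projected_rate(today_rate, years_from_now):
       # Attacker search speed projected forward under the assumed
       # doubling period.
       return today_rate * 2 ** (years_from_now / DOUBLING_PERIOD_YEARS)

   today = 1e9    # assumed: 10^9 keys per second from pooled workstations
   for bits in (40, 56, 64, 80):
       now = years_to_search(bits, today)
       later = years_to_search(bits, projected_rate(today, 10))
       print(f"{bits}-bit key: {now:.3g} years today, "
             f"{later:.3g} years a decade from now")

   Under these assumptions, a key length that resists casual
attack today falls well within the reach of a determined but
unauthorized attacker only a few years later, which is why any
dividing line between "weak enough" and "strong enough" keeps
moving.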


Hidden Back Doors

   A "back door" is an entry point to an application that
permits access or use by other than the normal or usual means.
Obviously, a back door known to government can be used to
obtain exceptional access. Back doors may be open or hidden.
An open back door is one whose existence is announced
publicly; an example is an escrowed encryption system, which
everyone knows is designed to allow exceptional access.(29) By
its nature, an open back door is explicit; it must be
deliberately and intentionally created by a designer or
implementer.

   A hidden back door is one whose existence is not widely
known, at least upon initial deployment. It can be created
deliberately (e.g., by a designer who insists on retaining
access to a system that he may have created) or accidentally
(e.g., as the result of a design flaw). Often, a user wishing
access through a deliberately created hidden back door must
pass through special system-provided authorization services.
Almost by definition, an accidentally created hidden back door
requires no special authorization for its exploitation,
although finding it may require special knowledge. In either
case, the existence of hidden back doors may or may not be
documented; frequently, it is not.

   Particularly harmful hidden back doors can appear when
"secure" applications are implemented using insecure operating
systems; more generally, "secure" applications layered on top
of insecure systems may not be secure in practice.
Cryptographic algorithms implemented on weak operating systems
present another large class of back doors that can be used to
undermine the integrity and the confidentiality that
cryptographic implementations are intended to provide. For
example, a database application that provides strong access
control and requires authorization for access to its data
files, but is implemented on an operating system that allows
users to view those files directly without going through the
database application, does not provide strong confidentiality;
such an application would need to encrypt its data files to
protect their confidentiality.
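
   The layering problem can be sketched in a few lines of
Python; the file name, user list, and function names below are
hypothetical. The point is that the application's own
authorization check simply never executes when the operating
system lets a user open the data file directly.

   DATA_FILE = "records.db"           # hypothetical data file
   AUTHORIZED_USERS = {"alice"}       # hypothetical authorization list

   def read_via_application(user):
       # The intended path: the application checks authorization
       # before releasing any data.
       if user not in AUTHORIZED_USERS:
           raise PermissionError(user + " is not authorized")
       with open(DATA_FILE, "rb") as f:
           return f.read()

   def read_via_operating_system():
       # The bypass: if the operating system places no restriction
       # on the file, any user obtains the same bytes without the
       # application's access control ever running.
       with open(DATA_FILE, "rb") as f:
           return f.read()

   Only if the stored data are themselves encrypted, under keys
that the operating system cannot supply, does the bypass yield
ciphertext rather than usable records.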

   The existence of back doors can pose high-level risks. The
shutdown or malfunction of life-critical systems, loss of
financial stability in electronic commerce, and compromise of
private information in database systems can all have serious
consequences. Even if back doors are undocumented, they can be
discovered and misused by insiders or outsiders. Reliance on
"security by obscurity" is always dangerous, because trying to
suppress knowledge of a design fault is generally very
difficult. If a back door exists, it will eventually be
discovered, and its discoverer can post that knowledge
worldwide. If systems containing a discovered back door were
on the Internet or were accessible by modem, massive
exploitation could occur almost instantaneously, worldwide. If
back doors lack a capability for adequate authentication and
accountability, then it can be very difficult to detect
exploitation and to identify the culprit.


Translucent Cryptography

   Translucent cryptography has been proposed by Ronald Rivest
as an alternative to escrowed encryption.(30) The proposed
technical scheme, which involves no escrow of unit keys, would
ensure that any given message or file could be decrypted by
the government with probability p; the value of p (0