Declaration of Edward Felten

in Felten v. RIAA (Aug. 13, 2001)

Grayson Barber (GB 0034)
Grayson Barber, L.L.C.
68 Locust Lane
Princeton, NJ 08540
(609) 921-0391

Frank L. Corrado (FLC 9895)
Rossi, Barry, Corrado & Grassi, P.C.
2700 Pacific Avenue
Wildwood, NJ 08260
(609) 729-1333

(Additional Counsel listed on signature page)
Attorneys for Plaintiffs


UNITED STATES DISTRICT COURT
DISTRICT OF NEW JERSEY

EDWARD W. FELTEN, et al., and USENIX
ASSOCIATION, a Delaware non-profit
non-stock corporation,

                              Plaintiffs,
            v.

RECORDING INDUSTRY ASSOCIATION OF
AMERICA, INC., SECURE DIGITAL MUSIC
INITIATIVE FOUNDATION, VERANCE
CORPORATION, JOHN ASHCROFT, in
his official capacity as ATTORNEY
GENERAL OF THE UNITED STATES, and
DOES 1 through 4, inclusive,

                              Defendants.

   Hon. Garrett E. Brown, Jr.
   Case No. CV-01-2669 (GEB)
   Civil Action


I. My background

  1. My name is Edward W. Felten. I am an Associate Professor of Computer Science at Princeton University, and I am Director of Princeton’s Secure Internet Programming Laboratory. I received my Ph.D. in Computer Science and Engineering from the University of Washington in 1993, and my B.S. in Physics from the California Institute of Technology in 1985. I have been on the faculty at Princeton for about eight years.

  2. My main area of research and teaching is computer security, and my other research interests include operating systems, computer networks, and Internet software.

  3. I have received a number of awards for my research, including a National Young Investigator award from the National Science Foundation, and an Alfred P. Sloan Foundation Fellowship. I have received Outstanding Paper or Best Paper awards at two conferences, including the most prestigious academic conferences on operating systems (in 1997) and computer system performance analysis (in 1995). I have given numerous special and invited talks at academic conferences.

  4. My research has been funded by government agencies, including the National Science Foundation and the Defense Advanced Research Projects Agency, and by industrial grants or gifts from IBM, Intel, Microsoft, Merrill Lynch, Sun Microsystems, Telcordia, and Trintech.

  5. My research has been covered extensively in the national press, even before the current matter came to public attention. I have been quoted or profiled on numerous occasions in publications such as the New York Times, the Washington Post, the Wall Street Journal, and Newsweek.

  6. I have been appointed to advisory boards and study panels by both industrial and governmental organizations. Sun Microsystems, Inc. appointed me to its Java Security Advisory Council, and I serve on Technical Advisory Boards for several other companies. The Institute for Defense Analyses, working in conjunction with the U.S. Department of Defense, chose me to serve in the Defense Science Study Group, and I obtained a U.S. “Secret” security clearance for that purpose. Finally, the National Research Council (which consists of the National Academy of Sciences, the National Academy of Engineering, and the Institute of Medicine) appointed me to its study committee on “Fundamentals of Computer Science.”

  7. I have worked extensively with law enforcement agencies. I assisted the local U.S. Attorney and the FBI with the “Melissa virus” case and a few other matters.

  8. I have also served as the primary computer science expert witness for the U.S. Department of Justice in the ongoing antitrust case involving Microsoft. In that capacity, I testified twice at trial and also filed a lengthy declaration in the remedy phase of that proceeding.

  9. I have published more than fifty papers in the research literature, and am the co-author of two books. This is the first time anyone has threatened to sue me because of something I wrote.

II. Audio watermark technology

  1. Several of the technologies at issue in this matter are “audio watermarking” technologies.

  2. Audio watermarking operates by putting a faint sound, known as a watermark, into the background of a piece of recorded music. The name “watermark” is an analogy to watermarks on paper, and audio watermarks are intended to be unobtrusive, readable, and indelible, as paper watermarks are.

  3. To be successful, an audio watermark scheme must have three technical characteristics. It must be:

    1. unobtrusive: Adding a watermark to a song must involve only changes that are inaudible, or nearly inaudible, to a human ear.

    2. readable: There must be a simple method for detecting the presence of a watermark, and if a watermark is present, for extracting the information stored in it.

    3. indelible: There must be no way for an adversary to remove the watermark without unacceptably damaging the audio quality of the song.

  4. Watermarking can be part of a technological system to protect recorded music from unauthorized access or copying. There are several strategies for doing this. Some of these strategies are more viable, from a technical standpoint, than others.

  5. All of these strategies rely on watermarking providing a way to attach some kind of message to a song, so that the message is inseparable from the recorded song. Sometimes the mere presence of a watermark is the entire message in itself; sometimes the message contains additional information encoded into the watermark.

  6. The Digital Millennium Copyright Act (“DMCA”) distinguishes between technologies that “control[] access” to a work, and technologies that “protect[] a right of a copyright owner under [copyright law]” of a work. Watermarking can be used as part of a system to control any type of use of a work: it can control access, or it can protect a right of a copyright owner, such as the right to redistribute a copy of the work. (Indeed, watermarking can be used as part of a system to enforce any restriction on use, including restrictions that have no basis in copyright law.)

  7. Since watermarks provide a way to attach data to an audio clip, and this data might be copyright management information (as defined in section 1202 of the DMCA), anyone who manipulates or removes a watermark could be removing copyright management information from a work.

  8. Verance promotes its watermarking technology as capable of controlling access, controlling copying, and carrying copyright management information. Verance’s web site states that the technology can be used to carry copyright management information and to control access:

In addition, Verance’s audio watermarks can carry and convey detailed information associated with the audio and audio-visual content for such purposes as monitoring and tracking its distribution and use as well as controlling access to and usage of the content.

(Verance web site at http://www.verance.com/verance/contentman/howitworks.html, visited June 14, 2001) Verance’s web site also states that the technology can be used to control copying:

Audio watermarking involves embedding a packet of additional digital data directly into the content signal…. This watermarked data can contain: copy or usage rules of the content, owner, distributor, or recipient.

(Verance web site at http://www.verance.com/verance/technology/index.html, visited June 14, 2001)
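For illustration only, the three requirements above can be sketched as a toy spread-spectrum-style watermark. This is my own simplified construction, not any technology at issue in this matter: it hides a faint, key-derived noise pattern in the audio samples (unobtrusive) and detects it by correlating against the same pattern (readable).

```python
import random

def embed_watermark(samples, key, strength=4):
    """Add a faint key-derived +/-1 noise pattern to audio samples.
    A small `strength` keeps the change nearly inaudible."""
    rng = random.Random(key)
    return [s + strength * rng.choice((-1, 1)) for s in samples]

def detect_watermark(samples, key, threshold=2.0):
    """Regenerate the key's noise pattern and correlate it with the samples.
    Unrelated audio correlates near zero; marked audio near `strength`."""
    rng = random.Random(key)
    score = sum(s * rng.choice((-1, 1)) for s in samples) / len(samples)
    return score > threshold
```

Indelibility is the hard part, and is exactly what schemes like this tend to lack: an adversary who can estimate, desynchronize, or filter out the pattern can remove the mark with little audible damage, which is the kind of weakness our analysis looked for.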

III. The Secure Digital Music Initiative

  1. The Secure Digital Music Initiative (“SDMI”, also known as the “SDMI Foundation”) is a consortium of about 180 companies in industries such as music, consumer electronics, and software. The purpose of SDMI is “to develop open technology specifications that protect the playing, storing, and distributing of digital music…” (SDMI web site, http://www.sdmi.org, visited May 21, 2001)

  2. SDMI technology is being developed in a two-phase manner. According to SDMI,

Phase I commences with the adoption of the SDMI Specification and ends when Phase II begins. Phase II begins when a screening technology is available to filter out pirated music. During Phase I, SDMI compliant portable devices may accept music in all current formats, whether protected or unprotected. In the future when Phase II begins, consumers can upgrade to enjoy new music released in both protected SDMI compliant formats and in existing unprotected formats. For example, when consumers wish to download new music releases that include new SDMI technology, they will be prompted to upgrade their Phase I device to Phase II in order to play or copy that music. The upgrade will incorporate a screening technology that permits playback of all content except pirated copies of new music releases. In both phase I and phase II, consumers will be able to rip songs from their CDs and download unprotected music, just as they do now.

(SDMI press release, dated June 28, 1999)

  3. SDMI announced their Phase I standard on July 8, 1999, in a document entitled “SDMI Portable Device Specification, Part 1, Version 1.0.” (“Phase I Specification”) This document was amended slightly in September 1999. A shorter document entitled “Guide to the SDMI Portable Device Specification Part 1, V 1.0” (“Phase I Guide”) provides more explanation related to the specification.

  4. SDMI’s technology is designed to provide SDMI’s members with wide-ranging control over the use of recorded music. For example, the technology can control access, or it can control copying, or it can control other kinds of use.

  5. The Phase I Guide discusses uses of the technology to control access:

“The specification … provides sufficient flexibility to allow many new products and services to be developed…. For example, future music offerings may include try-before-you-buy, listening rights for a certain period of time, subscriptions, rent-to-own, etc.”

(Phase I Guide at page 4) These types of offerings require the technology to control access to the work.

  6. The Phase I Specification also discusses uses of the technology to control copying. For example, “Usage Rules include rules governing Copy (including number of copies/generations of copies permitted,...)” (Phase I Specification at page 9) Similarly, the Phase I Specification’s “SDMI Default Usage Rules” state that “The Local SDMI Environment shall contain no more than four usable copies. Three of these copies may be transferred [i.e., copied] to [portable devices]” (Phase I Specification at page 18)
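As a rough sketch of what such default usage rules mean in practice, the quoted limits can be modeled as simple counters. This is a toy model of my own; the class and method names are illustrative and are not taken from the Phase I Specification.

```python
class LocalSDMIEnvironment:
    """Toy model of the quoted default usage rules: at most four usable
    copies, of which at most three may be transferred to portable devices."""
    MAX_USABLE_COPIES = 4
    MAX_PORTABLE_TRANSFERS = 3

    def __init__(self):
        self.usable_copies = 1   # the original track checked into the environment
        self.portable_transfers = 0

    def make_copy(self):
        if self.usable_copies >= self.MAX_USABLE_COPIES:
            raise PermissionError("copy limit reached")
        self.usable_copies += 1

    def transfer_to_portable(self):
        if self.portable_transfers >= self.MAX_PORTABLE_TRANSFERS:
            raise PermissionError("portable transfer limit reached")
        self.portable_transfers += 1
```

The point of the watermark in such a system is to carry or trigger these rules in a way the user cannot strip off; the enforcement logic itself is ordinary bookkeeping.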

IV. The SDMI Challenge

  1. On February 24, 2000, SDMI issued a “Call for Proposals for Phase II Screening Technology,” which is attached as Exhibit 1. This document, later amended, set out the process that SDMI would follow in choosing a Phase II technology. The process involved submission of candidate technologies by companies, and evaluation of those technologies by SDMI. One form of evaluation was a “public challenge:”

Solutions shall be resistant to malicious attacks and will be subject to malicious attack testing.

Malicious attack testing may take the form of one or more of:

  • Public Challenge: marked Content and commercially representative systems are made generally available under the usage model described in the Offer and everyone is invited to attempt to break Proponent’s technology.

All Proponents … will be required to specify in their Terms and Conditions Submission whether or not they are willing to subject their technologies to a Public Challenge.

(Call for Proposals for Phase II Screening Technology, at pp. 10-11; emphasis in original)

  2. In September 2000, SDMI announced a “public challenge” to evaluate six technologies that SDMI was reportedly considering for inclusion in its Phase II system. The challenge was announced in an “open letter” from SDMI to the public, and SDMI created a web site, www.hacksdmi.org, to coordinate the challenge process.

  3. The challenge lasted for three weeks, from September 15, 2000 to October 7, 2000. SDMI later extended the end of the challenge until noon PDT on October 8, 2000.

  4. Everyone was invited to participate in the challenge. According to a September 6, 2000, “Open Letter to the Digital Community,” from Leonardo Chiariglione, SDMI’s Executive Director (at the time),

We are now in the process of testing the technologies that will allow these protections. The proposed technologies must pass several stringent tests: they must be inaudible, robust, and run efficiently on various platforms, including PCs. They should also be tested by you.

So here’s the invitation: Attack the proposed technologies. Crack them.

(Exhibit A [filed with Complaint]; emphasis in original)

  5. The challenge provided six technologies, designated by the letters ‘A’ through ‘F’, for evaluation. Technologies A, B, C, and F were audio watermarking technologies; technologies D and E had other purposes.

  6. In order to participate in the challenge, participants were asked to agree to a “Click-Through Agreement” on SDMI’s web site. After clicking the “I Agree” button on the bottom of the Click-Through Agreement, participants would be given certain technical materials related to the challenge, including several music clips (the “Challenge Clips”). The text of the Click-Through Agreement, as captured from SDMI’s web site on October 3, 2000, was attached to the Complaint as Exhibit B.

  7. Challenge participants were also given access to an on-line “oracle” for each of the six technologies. Participants could submit files electronically to one of the oracles, and that oracle would return its evaluation of the submitted file. For example, for each of the watermark technologies, a participant could submit a file and the oracle would reply by saying (among other things) whether or not there was a detectable watermark in the submitted file.

  8. The Click-Through Agreement gave researchers a choice of either (a) signing an additional agreement with SDMI agreeing not to disclose the results of their research, and in exchange becoming eligible for a cash prize, or (b) forgoing the cash prize while retaining the right to publish.

Compensation of $10,000 will be divided among the persons who submit a successful unique attack on any individual technology during the duration of the SDMI Public Challenge. In exchange for such compensation, all information you submit, and any intellectual property in such information (including source code and other executables) will become the property of the SDMI Foundation and/or the proponent of that technology. In order to receive compensation, you will be required to enter into a separate agreement, by which you will assign your rights in such intellectual property. The agreement will provide that (1) you will not be permitted to disclose any information about the details of the attack to any other party, (2) you represent and warrant that the idea for the attack is yours alone and that the attack was not devised by someone else, and (3) you authorize us to disclose that you submitted a successful challenge. If you are a minor, it will be necessary for you and your parent or guardian to sign this document, and any compensation will be paid to your parent or guardian.

You may, of course, elect not to receive compensation, in which event you will not be required to sign a separate document or assign any of your intellectual property rights, although you are still encouraged to submit details of your attack.

(Click-Through Agreement)

  9. My co-authors and I chose to forgo the cash prize, in order to retain our right to publish our results. None of us signed the additional non-disclosure agreement, or any other agreement (other than the Click-Through Agreement) related to the challenge.

  10. Our motivation from the beginning was to do scientific research and publish our results. Had we believed that the Click-Through Agreement prohibited us from publishing our results, we would not have participated in the challenge.

  11. To my knowledge, the behavior of all of the members of our research team has been consistent at all times with the Click-Through Agreement.

V. Our response to the challenge

  1. A few days after SDMI issued their challenge, I arranged a meeting at Princeton for researchers interested in the possibility of working on the challenge. I invited everyone in the Computer Science department, as well as several people from the Electrical Engineering (“EE”) department. Roughly twenty people came to the meeting. At the meeting, we discussed the SDMI challenge and what would be involved in working on it.

  2. As a result of this meeting, a group of five Princeton researchers emerged as having serious interest in the challenge. This group of five included me, Bede Liu (an EE professor), and three EE graduate students: Scott Craver, John P. McGregor, and Min Wu.

  3. At about the same time, I learned that Dan Wallach, a Computer Science professor at Rice University and a former student of mine, was interested in working on the challenge, along with two Rice students, Adam Stubblefield and Ben Swartzlander. I also learned that Drew Dean, a researcher at Xerox’s Palo Alto Research Center (“PARC”) who had worked closely with me while he was a student at Princeton, was interested in the challenge. I had several phone conversations with Prof. Wallach and Dr. Dean, and we agreed that the five Princeton researchers, the three Rice researchers, and Dr. Dean would all work together on the challenge as a single team.

VI. Our research

  1. During the challenge period, our group used standard methods of analysis to study the Challenge Clips. Different members of the group focused their attention on different challenge technologies, and the group engaged in general discussion of our results as they became available during the challenge period.

  2. For each of the four watermarking challenges, we analyzed the Challenge Clips that SDMI provided, and we submitted a series of music clips to the oracle. Our submissions to the oracle had two purposes: first, as experiments to characterize the oracle’s behavior; and second, as attempts to learn about the watermarking technology and to determine whether it could be defeated.

  3. Our experiments with the oracle determined at least two things. First, we submitted samples that we knew to have perfect audio quality but a detectable watermark, and the oracle rejected them. This confirmed that the oracle was rejecting submissions that had detectable watermarks. Second, we submitted samples that we knew to have no watermark but poor audio quality, and the oracle rejected them. This confirmed that the oracle was rejecting submissions that had poor audio quality. These facts confirm that our later submissions, which the oracle did not reject, did not contain a detectable watermark, and had passed an audio quality test. We double-checked the audio quality of these later submissions by listening to them ourselves.

  4. Matthew Oppenheim of SDMI, in a later phone conversation, confirmed for me that the oracle’s non-rejection message indeed meant that the submitted sample did not have a detectable watermark.

  5. We defined an attack on a watermarking technology as successful if that attack could succeed in removing the watermark without damaging the audio quality excessively. Our definition is the proper one from a scientific standpoint, because it is the criterion that determines whether a technology is able to prevent copyright infringement.

  6. SDMI may have used the term “successful” differently, to denote that a challenge participant had satisfied some unspecified set of procedural requirements related to qualification for the challenge’s cash prize. As we did not intend to apply for the cash prize, these procedural requirements did not apply to us, and we were not at all interested in whether we had met them.

  7. After the challenge period ended, SDMI invited us to participate in the next phase of the challenge. This new phase purported to test whether our attacks were repeatable on other music clips, but in fact its design did not test repeatability. In the first phase of the challenge, we had already been working under conditions much less favorable than those a real would-be pirate would face; the second phase would be even less realistic. More important, the second phase was constructed so that we would receive no information of any kind by participating. Since the second phase had absolutely nothing to offer us as researchers, we chose not to participate in it.

  8. On October 23, 2000, I received an unexpected telephone call from Matthew Oppenheim of SDMI. The topic of discussion was the second phase of the SDMI Challenge. I told Mr. Oppenheim that we did not plan to seek the cash prize. I also told him that we did not plan to participate in the second round of the challenge, for the reasons detailed above. I offered to conduct a test of whether we could actually repeat our successful attacks from the challenge, by attacking other music clips under the same conditions as the original challenge. On behalf of SDMI, he declined this offer.

  9. The main conclusion of our technical analysis was that SDMI’s technologies were relatively weak, and would quickly be defeated if they were deployed. This result is of considerable interest to musicians, songwriters, and the public, since they are among the parties who would end up suffering if expensive but insecure technologies were deployed. Of course, our scientific colleagues also expressed great interest in our research.

  10. Our methods of analysis, and the results we obtained, are best described by the research paper that we wrote, the paper that is at issue in this action.
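The oracle behavior our experiments characterized (rejecting a submission if it still carries a detectable watermark, and rejecting it if its audio quality is too poor) can be modeled in miniature. The sketch below is my own illustration, not SDMI's actual implementation: it combines a toy correlation-based watermark detector with a crude RMS-error quality gate against a reference clip.

```python
import random

def watermark_detectable(samples, key, threshold=2.0):
    """Toy correlation detector for a key-derived +/-1 noise pattern."""
    rng = random.Random(key)
    score = sum(s * rng.choice((-1, 1)) for s in samples) / len(samples)
    return score > threshold

def audio_quality_ok(samples, reference, max_rms_error=10.0):
    """Crude quality gate: RMS deviation from the reference clip."""
    mse = sum((s - r) ** 2 for s, r in zip(samples, reference)) / len(samples)
    return mse ** 0.5 <= max_rms_error

def oracle(submission, key, reference):
    """Reject unless the watermark is gone AND the quality is acceptable,
    mirroring the two rejection behaviors confirmed experimentally."""
    if watermark_detectable(submission, key):
        return "rejected: watermark still detectable"
    if not audio_quality_ok(submission, reference):
        return "rejected: audio quality too low"
    return "not rejected"
```

Under this model, a submission that is "not rejected" is one where the mark was removed without excessive damage to the audio, which matches the success criterion described above.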

VII. The paper

  1. At the end of the three-week challenge period, the bulk of our research had to stop, because the Click-Through Agreement prohibited us from making any use of the Challenge Clips outside of the challenge period. Our attention then turned to writing a paper describing our results.

  2. Writing the paper proved to be challenging, for several reasons. First, the paper had a large number of authors, many of whom had never worked together previously, or even met one another. Second, we had a large amount of material to describe in the paper. Third, we knew that readers would be looking to us for general discussion of the state of the art in watermarking technology, and we spent considerable time debating how strongly we should state our general skepticism about watermarking.

  3. Finally, our writing was hampered by the Click-Through Agreement’s prohibition on further research use of the Challenge Clips, and by the fact that the oracles were turned off at the end of the challenge period. Generally, when one writes a research paper about a set of experiments, one discovers during the writing process that there are small omissions or gaps in the experiments done so far. In normal practice one does a few small experiments during the writing process to clear up these issues. Because of the Click-Through Agreement and the shutdown of the oracles, we could not do these follow-up experiments, so we had to determine instead whether we could answer such questions indirectly using data we already had, or we had to figure out a way to work around the gaps without misleading our readers.

  4. Because of these factors, the paper took longer than expected to write, and it was only in late November 2000 that we finally had a complete draft. We submitted this draft to the organizers of the Fourth International Information Hiding Workshop (“IHW”) for review.

  5. Due to concerns about the Digital Millennium Copyright Act (“DMCA”) and Judge Kaplan’s decision in Universal v. Reimerdes, we chose not to include in the paper certain information that we otherwise would have liked to include. In particular, we wrote this version of the paper so that it did not contain any software code, pseudocode, or code-like descriptions of algorithms. Had we not imposed this constraint on ourselves, I believe we would have included some code in the paper, and the paper would have been better as a result.

  6. On February 23, 2001, we learned that the IHW organizers had accepted our paper for publication and presentation at the IHW conference. We received copies of the reviews written by the anonymous reviewers. The reviews were generally enthusiastic; one reviewer went so far as to describe the paper as a “tour de force.”

  7. We revised the paper in response to the reviewers’ suggestions and to incorporate some other changes that we felt would improve the paper, and in March 2001 we submitted this revised version of the paper to be included in the materials handed out at the IHW conference.

  8. Our goal in writing the paper was to communicate the useful scientific results of our work to our colleagues and to interested members of the public. The editorial choices we made in writing and revising the paper were motivated by this goal.

  9. Our paper was scheduled for presentation on April 26, 2001, at IHW in Pittsburgh.

  10. On April 20, 2001, someone leaked, to at least one public web site, the version of the paper that we had sent to IHW for review back in November 2000. I was not the source of the leak. I asked all of my co-authors whether they leaked the paper, and they all assured me that they did not.

VIII. Interaction with Verance

  1. In early November 2000, not long after our success in defeating the SDMI challenges had become public, I received an email message from Joseph Winograd, who identified himself as Executive Vice President and Chief Technologist at Verance Corporation. He stated that Verance had created one of the technologies under study in the challenge and he asked for information about what we had discovered regarding Verance’s technology. I understand that the Verance technology at issue is the one identified as “Technology A” in the SDMI Challenge. Dr. Winograd asked to speak with me on the phone about this topic.

  2. At the time I was attending a computer security conference in Greece. After returning home, I discussed this matter on the phone with Dr. Winograd. The conversation was cordial and we each stated our positions regarding the challenge and the prospects for future interaction between Verance and our group. I informed Dr. Winograd that the paper describing our results was not yet ready for publication, and I offered to send him a copy before it was published, in accordance with my normal practice with papers that discuss commercial technology.

  3. On March 30, 2001, I received another email message from Dr. Winograd requesting a pre-publication copy of the paper. Because the final version of the paper was ready, I responded the next day by sending him an electronic copy of the paper. I asked him not to circulate the paper outside of Verance, and he indicated in a later email that he would comply with that request.

  4. On April 6, 2001, I received an email from Dr. Winograd saying that he was “most concerned” about the contents of the paper, and asking me to engage in a dialogue about the paper’s contents. He also stated that he “did take the precautionary step of alerting the SDMI Foundation … and provid[ing] [them] with a brief general description of your paper's contents.”

IX. Threats of Legal Action Against Us

  1. On April 9, 2001, three days after Dr. Winograd said he had alerted SDMI, I received a letter from SDMI. The letter was on the letterhead of the Recording Industry Association of America (“RIAA”) and was signed by Matthew J. Oppenheim, Esq., RIAA’s Vice President for Legal Affairs; Mr. Oppenheim also identified himself as Secretary of the SDMI Foundation. A copy of this letter was attached to the Complaint as Exhibit C. I interpreted the letter as a threat to sue me, the other authors, and our respective employers if we proceeded with publication of the paper; and the other authors and their employers likewise responded to it as a threat.

  2. Because the letter from SDMI came so soon on the heels of Dr. Winograd’s communication with SDMI, and because the letter mentioned Verance and its commercial interests explicitly, I inferred at the time that Verance was involved in the effort to threaten us. Developments since that time have only confirmed this inference.

  3. Beginning on about April 11, I, along with lawyers for Princeton, Rice, Xerox, and Dr. Dean, engaged in a series of conversations with Mr. Oppenheim, Dr. Winograd, Mr. David Leibowitz (Chairman of the Board of Verance, and previously Executive Vice President and General Counsel of RIAA), and at least two outside lawyers working for Verance. To my knowledge I was involved in every conversation that took place between representatives of the authors and their employers on the one hand, and representatives of Verance, RIAA, and SDMI on the other.

  4. Verance clearly took the lead in these discussions. At least four representatives of Verance, including three lawyers, participated in these discussions, while only Mr. Oppenheim participated on behalf of RIAA and SDMI. Our conversations with Verance employees were much longer and more detailed than those with Mr. Oppenheim. Mr. Oppenheim did not participate in any conversations regarding technical issues or the content of the paper; indeed he excused himself from one conference call upon learning that that call would touch upon the technical material in the paper.

  5. On April 13, 2001, Howard Ende, Princeton University’s General Counsel, sent a letter to Mr. Oppenheim in response to Mr. Oppenheim’s original April 9 letter to me. Mr. Ende’s letter asked Mr. Oppenheim to clarify some of the statements he had made in his original letter. A copy of Mr. Ende’s letter is attached as Exhibit 2. Mr. Oppenheim never replied to Mr. Ende’s letter. Instead, we received a response to some of Mr. Ende’s questions from Verance’s lawyers, in a later conference call.

  6. At one point in the discussions, we (Princeton’s lawyers and I) asked RIAA, SDMI and Verance to designate jointly a technical representative to discuss with me whether a mutually acceptable version of our paper could be agreed upon. RIAA, SDMI, and Verance designated Dr. Winograd for this purpose.

  7. As a result, I carried out a dialogue, by email and telephone, with Dr. Winograd, including at least two one-on-one phone conversations with him, regarding the technical content of our paper and whether a mutually acceptable version could be agreed upon. The last of these conversations occurred on April 26, one day before the paper was to be presented at IHW, and lasted about an hour.

  8. Based on the circumstances, and on all of my preceding conversations, my understanding at the time was that if I agreed in this last conversation to publish only a version of the paper acceptable to Dr. Winograd, then the threatened lawsuit against us would be averted.

X. Verance’s Requested Changes to Our Paper

  1. During my discussions with Dr. Winograd, he expressed general concern about the effect of the information in our paper on Verance’s profits. At no time did he indicate that the paper was not truthful.

  2. The only specific indication he made regarding what changes in the paper would make it acceptable to Verance was in a document he sent me, entitled “Recommendations on ‘Reading Between the Lines: Lessons from the SDMI Challenge,’” which is attached as Exhibit 3. This document made twenty-five specific requests for changes to the paper, some of which were actually compound requests asking for the removal of several sections of text or entire diagrams.

  3. At no time did Dr. Winograd indicate that anything less than all of the requested deletions might be acceptable to Verance, despite my repeated requests for some flexibility from Verance.

  4. Verance’s “Recommendations” would have gutted the paper, removing virtually all of its detailed technical content.

  5. Based on my fifteen years as a professional researcher, my publication of at least fifty peer-reviewed papers, and my experience as a reviewer for dozens of scientific conferences and journals, I can state with confidence that Verance’s recommended version of the paper would have been rejected by a forum such as IHW, and most likely would have received scathing reviews. I would be embarrassed to submit such a paper to any respectable conference or journal.

XI.Our Decision to Withdraw the Paper

  1. As the scheduled date of our IHW presentation neared, the authors faced increasing pressure to withdraw the paper. Everyone I spoke to seemed to take the threat of litigation very seriously.

  2. The pressure on me was particularly intense. The threatening letter from Mr. Oppenheim had been addressed to me personally, and only to me. I was perceived as the leader among the paper’s authors, and I was the only author who participated in all, or even most, of the conversations with the parties who had threatened us.

  3. On April 19, 2001, I received an email message from IHW’s email address, signed by Dr. Ira Moskowitz, the Program Chair of IHW. (The Program Chair of an academic conference is in charge of the peer-review process and the choice of which papers will be presented.) The message said that Dr. Moskowitz had decided that he would remove our paper from IHW unless all parties certified, by close of business on April 23, that publication of the paper would be legal. This effectively gave SDMI, RIAA, and Verance veto power over publication of the paper.

  4. At about the same time, I received a late-night phone call at home from Dr. Moskowitz. Though he did not ask me to withdraw the paper, Dr. Moskowitz was clearly very worried because of the pressure that had been brought to bear on him and others involved in running IHW. I assured him that we would not proceed with publication if doing so would expose him or anyone else, against their will, to litigation.

  5. Dr. Moskowitz’s April 23 deadline came and went, and Defendants did not grant permission for the paper to be published, so the paper was removed from the IHW program.

  6. On the evening of April 23, IHW’s web site, which was the main public source of information about IHW, suddenly disappeared. Attempts to access the site were met with an error message indicating that the requested web page did not exist. Though the site had been hosted by the Naval Research Laboratory, which was Dr. Moskowitz’s employer, I understand that the site’s disappearance came as a surprise to the conference organizers, including Dr. Moskowitz.

  7. On April 24, I received email messages from Dr. Ross Anderson, a member of IHW’s Program Committee, and Dr. John McHugh, the General Chair of IHW, stating that Dr. Moskowitz’s decision to remove the paper from the program had been overruled by a vote of the full Program Committee, and that our paper was therefore reinstated to the IHW program.

  8. In the end I felt that proceeding with publication and presentation of the paper at IHW would be too risky, given the very credible threats of litigation against the authors, the conference organizers, and their respective employers. The other authors agreed, so we withdrew the paper from IHW.

  9. A few hours after we had withdrawn our paper, Mr. Oppenheim and SDMI issued a statement, which appeared on the RIAA web site. The statement said, “We sent the letter because we felt an obligation to the watermark licensees who had voluntarily submitted their valuable inventions to SDMI for testing.” (http://www.riaa.org/PR_Story.cfm?id=407, visited May 21, 2001).

  10. Mr. Oppenheim’s statement to the press also said that SDMI had never intended to sue us. As this statement was made only to the press, and it was phrased as a statement of their current intention (rather than as a promise to refrain from future action), I understood it as an attempt to “spin” the press, and not as a binding promise not to sue us. I would have expected any real retraction of RIAA’s and SDMI’s threats against us to be communicated to us, rather than to the press; but to my knowledge neither RIAA nor SDMI made any attempt to communicate with us until after we had brought this lawsuit.

  11. Verance did not join Mr. Oppenheim’s statement. To my knowledge they made no claim at the time, not even to the press, that they were backing away from their previous threats.

XII.Our Resubmission of the Paper

  1. After withdrawing our paper from IHW, we edited the paper slightly and on May 11, 2001, we submitted the edited version to the USENIX Security Symposium (“USec”).

  2. In preparing the paper for submission to USec, we added two sections of computer code to it, in order to improve the paper by making it easier for our readers to follow. As stated above, we had originally forced ourselves to avoid including code in order to reduce the risk of DMCA-based threats against us. Given that such threats had occurred anyway even in the absence of the code, we felt that there was no longer any reason to censor ourselves.

  3. Although the submission deadline for USec had passed, the USec organizers agreed to give our paper expedited reviewing, given the unusual circumstances.

  4. On May 23, 2001, Dr. Aviel Rubin, on behalf of the USec organizers, informed me that our paper had been accepted for publication and presentation at USec. USec will be held in Washington, DC, August 13-17, 2001. Dr. Rubin sent me the reports submitted by the anonymous reviewers.

  5. After editing the paper to address the reviewers’ comments, and to make other improvements, on June 6, 2001 we sent the “camera-ready” version of the paper to USENIX for inclusion in the printed conference proceedings and on USENIX’s web site.

  6. I understand that, in accordance with normal practice, the printed conference proceedings will be distributed to USec attendees on August 12, 2001. The oral presentation of our results at USec is scheduled for August 15, 2001.

XIII.Harm Caused To Me by Defendants’ Behavior

  1. Since Mr. Oppenheim’s letter to me arrived on about April 10, 2001, I have had to devote nearly all of my professional time to dealing with the effects of Defendants’ threats. My research has virtually stopped, and I have had to cut corners on my other professional duties. In computing research, three months is a long time to be idle.

  2. By this point, I have had to spend much more time defending my right to publish than I spent on doing the original SDMI challenge research and writing the paper.

  3. Though Defendants’ behavior has harmed me and the other Plaintiffs directly, it threatens to cause much greater harm by impeding the progress of research and education in computer security.

XIV.Effect of Defendants’ Interpretation of the DMCA on Computer Security Research and Education

  1. Computer security research seeks to understand how to build computer systems, and other information processing systems, that can meet requirements related to the confidentiality, integrity, and availability of information.

  2. Computer security is built on two pillars: synthesis and analysis. Synthesis seeks to design and implement new systems, and analysis seeks to understand the strengths and weaknesses of existing systems. The two advance in tandem: synthesis provides ever-improving systems to be analyzed, and analysis provides the information needed to synthesize stronger systems in the future.

  3. A system designer’s effectiveness improves when he receives constructive criticism on his work. The same is true of the technical community as a whole; when we receive constructive criticism about the current state of the art, we can do a better job in the future. The eminent cryptographer Ronald L. Rivest put it well when he wrote in the Preface to the standard reference book, Handbook of Applied Cryptography, “When a system is ‘broken,’ our knowledge improves, and next year’s system is improved to repair the defect.” (page xxi, Handbook of Applied Cryptography, by Alfred J. Menezes, Paul C. van Oorschot, and Scott A. Vanstone, CRC Press, 1996 (“Handbook”))

  4. Analysis is a respectable, and respected, part of the research process. As Rivest wrote in the Handbook’s Preface, “A good cryptographer rapidly changes sides back and forth in his or her thinking, from attacker to defender and back. Just as in a game of chess, sequences of moves and counter-moves must be considered until the current situation is understood.” (Handbook at page xii)

  5. Analysis cannot be only theoretical. When we think we have found a weakness in a security technology, we try to carry out an attack in the laboratory, to confirm that the weakness is real. For example, if we think we have found a weakness in an encryption algorithm, we try to exploit that weakness to read some test messages that our colleagues have encrypted for us, to see whether we can get access to the data without knowing the decryption key. Of course it is both unethical and illegal to break into other people’s computer systems without their permission, and legitimate researchers never do so.
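
The kind of laboratory confirmation described above can be made concrete with a toy sketch. The following is entirely hypothetical (it involves no real product or algorithm): a cipher with a deliberately tiny key space, whose suspected weakness is confirmed by recovering a colleague's test message without knowledge of the key.

```python
# Illustrative only: a toy cipher with a deliberately tiny key space,
# used to show how a suspected weakness is confirmed in the laboratory.

def toy_encrypt(plaintext, key):
    """'Encrypt' by XORing every byte with a single-byte key (0-255)."""
    return bytes(b ^ key for b in plaintext)

def plausible_plaintext(text):
    """Crude check: candidate consists only of lowercase letters and spaces."""
    return all(b == 0x20 or 0x61 <= b <= 0x7A for b in text)

def lab_attack(ciphertext):
    """Confirm the suspected weakness: recover the message without the key
    by trying all 256 possible keys."""
    for guess in range(256):
        candidate = toy_encrypt(ciphertext, guess)  # XOR is its own inverse
        if plausible_plaintext(candidate):
            return candidate
    return None

# A colleague encrypts a test message with a key the analyst is never told...
ciphertext = toy_encrypt(b"attack confirmed", 173)
# ...and the analyst recovers it anyway, demonstrating the weakness is real.
recovered = lab_attack(ciphertext)
print(recovered)  # → b'attack confirmed'
```

The point of such an exercise is exactly the one made above: the attack is run only against test data the researchers created for themselves, never against other people's systems.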

  6. My research on computer security is funded in part by the Defense Advanced Research Projects Agency (“DARPA”), a part of the U.S. Department of Defense. DARPA requires us, as a condition for continued funding, to file a report describing the results of an analysis of the flaws and vulnerabilities of our work.

  7. Analysis is also an important part of teaching in computer security, for the same reasons it is valuable in research and in practice.

  8. In 1996, I wrote a paper, with David Oppenheimer, entitled “Protocol Failure Analysis in the Applied Cryptography Curriculum,” which I presented at the Conference on Computer Security Education, which was organized by the Naval Postgraduate School. This paper is based on our experience in teaching a seminar on applied cryptography at Princeton in 1995. A copy of this paper is attached as Exhibit 4. The paper argued for a greater reliance on analysis assignments in teaching courses about cryptography. According to the paper’s abstract, “our experience leads us to believe that a course on applied cryptography should include assignments that emphasize protocol analysis, especially finding and correcting flaws in real and hypothetical protocols.”

  9. More specifically, this paper states:

The course projects should not end with a presentation and submitted paper. Instead, students should be asked as a final exercise to analyze someone else’s project in an attempt to find and correct perceived flaws in protocol and implementation. In one case this happened spontaneously in our seminar, and it proved to be one of the most valuable learning experiences of the semester.

(page 3) David Wagner, the (then-) undergraduate student who performed this spontaneous analysis, later became a well-known and respected cryptography researcher. He is now on the faculty of the University of California at Berkeley.

  10. I teach a senior-level course at Princeton entitled “Information Security.” When I created this course, I included several assignments that require students to perform analyses of systems and to write reports describing what they found. For most of the course, students alternate between doing synthesis assignments and doing analysis assignments.

  11. I understand that Defendants advocate an interpretation of the DMCA that would outlaw analysis of systems that might be used to control the use of copyrighted materials. I am not a lawyer, so I will not offer an opinion here about whether that is a legally correct interpretation of the DMCA. However, I can say that such an interpretation would effectively prevent analysis of critical systems, and so would have a disastrous effect on education, research, and practice in computer security.

  12. Computer security technology is designed to control the use of information. The technology works in the same way, regardless of whether we want to control the use of that information because of copyrights, or for some other reason. For example, the same technology that controls access to private medical records can also control access to copyrighted works. Any encryption scheme, or any other computer security technology, can be used to protect copyrighted works. Thus if the DMCA prohibits analytic research on technologies that can control the use of copyrighted materials, that prohibition reaches all analytic computer security research.

  13. In practice, virtually any anomaly in the behavior of a computer system may open up the possibility of a security breach. Thus virtually any discussion of behavioral anomalies, regardless of what aspect of the computer system they are associated with, may be painted as a possible DMCA violation.

  14. Even research papers whose main purpose is synthesis (i.e., the design of new systems) often contain an element of analysis. The author of such a paper may want to demonstrate the superiority of his new invention by showing that it overcomes the weaknesses of existing methods --- and this requires discussing what those weaknesses are.

  15. Not only in computer science, but also across all scientific fields, skeptical analysis of technical claims made by others, and the presentation of detailed evidence to support such analysis, is the heart of the scientific method. To outlaw such analysis is to outlaw the scientific method itself.
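
The observation above, that security technology is indifferent to the kind of data it protects, can be illustrated with a small sketch. The code is hypothetical and drawn from no party's actual system: a single generic encryption routine is applied, unchanged, to a medical record and to audio data.

```python
# Illustrative sketch (not any party's actual system): one generic
# encryption routine, applied unchanged to two very different payloads.
import hashlib

def keystream(key, n):
    """Derive n pseudorandom bytes from key by hashing a counter (sketch only,
    not a vetted cipher)."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]

def encrypt(key, data):
    """XOR the data with the keystream; decryption is the same operation."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

medical_record = b"Patient 4711: penicillin allergy"
copyrighted_song = b"PCM audio samples of a pop song..."

# The routine neither knows nor cares what the bytes mean.
c1 = encrypt(b"hospital-key", medical_record)
c2 = encrypt(b"label-key", copyrighted_song)
assert encrypt(b"hospital-key", c1) == medical_record
assert encrypt(b"label-key", c2) == copyrighted_song
```

Nothing in the routine distinguishes the two uses; a rule restricting analysis of the second use necessarily restricts analysis of the first.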

XV.Ethical Considerations

  1. I understand that Defendants have questioned the ethical propriety of our decision to publish the information in our paper. As a computer security researcher, I have extensive experience in dealing with these ethical issues. I have frequently discussed the general ethical issues in computer security research with my colleagues at other institutions, and with an ethicist from Princeton University’s Center for Human Values. I am convinced that, under the circumstances, we are ethically compelled to publish this information.

  2. In considering the ethics of this situation, it is important to remember that we did not cause the weaknesses of Defendants’ products --- we merely discovered the truth about how weak those products are. The products would have been equally weak had we never done our analysis.

  3. In deciding what to publish, our primary ethical responsibility is to the public interest. I believe that the public interest is served, both in the short run and in the long run, by the disclosure of this information and by public discussion of it. In my view, it would be unethical for us to withhold this information from the public.

  4. The public gains in the short run when it receives truthful information about the effectiveness of products it is being asked to buy. It may well be that the public, having learned that Defendants’ products are not as effective as previously thought, will choose to buy less of those products. Allowing consumers to make better-informed purchasing decisions serves the public interest.

  5. The public benefits in the long run if companies know that if they make grandiose technical claims about their products, the public will eventually find out whether those claims are true. In an industry that is too often hype-driven, scientific evaluation of product claims is an important moderating influence.

  6. The public gains in the long run because the discussion of our results by our colleagues will contribute to scientific understanding of the very difficult technical issues surrounding protection of copyrighted material, leading to better-designed systems in the future.

XVI.Chilling Effect of the DMCA on Speech

  1. The DMCA has already had a chilling effect on my speech, as the events of the last three months have illustrated. Without a favorable ruling from this Court, I expect the chill on my speech to continue. Having made an example of me and my co-authors, the Defendants will find it easier to intimidate others in the future.

  2. The direct chilling effect of the DMCA is easy to see. The indirect chilling effects will be more subtle yet will reach farther. Knowing that they cannot publish the results of research in particular areas, researchers will avoid studying those areas. The result will be that nobody has anything useful to say about those areas --- speech will be chilled, indirectly.

  3. I have experienced this indirect chill already. Graduate students often come to me to ask my advice about what research topic to choose. My recent experience with the DMCA has had a profound impact on the advice I give to these students. I know that when students look for jobs, they will be evaluated based on their publication record, and I know that a typical student will have only a few publications to his name when he graduates. To take away even one potential publication from such a student can be a real blow to his career. Such a student can ill afford to devote time to a project that may prove unpublishable due to the DMCA, and I cannot in good conscience advise him to do so.

  4. As is the norm in academic computer science, almost all of my research is done jointly with graduate students. Even if I am willing to accept, for myself, the risk and anxiety that come from working in the shadow of the DMCA, I cannot subject my students to it.

  5. I believe that many of my colleagues in the research community are already avoiding doing analytical research due to the DMCA. I am sure that many of them are watching this case, to gauge the extent of the DMCA’s impact on their future speech.

XVII.Effect of the DMCA on Scientific Conferences

  1. Many research conferences publish papers that could pose DMCA problems. For example, I was appointed to the program committee of the Workshop on Security and Privacy in Digital Rights Management (“DRM Workshop”), which will be held in Philadelphia in November 2001. (“Digital Rights Management” refers to technologies to control access to, or use of, digital works.) As a program committee member I would be partially responsible for the choice of papers to be published at the DRM Workshop.

  2. In addition, I was appointed as Publications Chair of the ACM Conference on Computer and Communications Security (“CCS”), which will also be held in Philadelphia in November 2001. My duties as Publications Chair include assembling the authors’ papers, having them printed into a bound “Proceedings” document, and distributing the Proceedings at the conference.

  3. Both of these conferences cover subject areas that are affected by the DMCA. CCS covers all areas of computer security, and has published papers related to copy-protection mechanisms in the past. The DRM Workshop is specifically about technologies that control access and copying of copyrighted materials --- that is the main topic of the conference.

  4. The DRM Workshop’s Call for Papers requests the submission of papers on “all theoretical and practical aspects of [Digital Rights Management]”, on “experimental studies of fielded systems”, and on “threat and vulnerability assessment”, among other topics.

  5. The threats against the researchers who volunteered to organize the IHW conference have been discussed widely among computer security researchers. My impression is that many people are worried about the consequences of volunteering to help organize conferences.

  6. Conferences such as CCS and the DRM Workshop rely on researchers who work, on a volunteer basis, as Program Committee members, as Publications Chair, and in other positions. If researchers are unwilling to volunteer for these positions, the conferences cannot be held.

XVIII.Effect of the DMCA on Scientific Progress

  1. Scientific progress has been analogized to the construction of a brick wall. It grows by the addition of many small bricks, each supported directly by a few bricks below it, and supported indirectly by a great many bricks, some of which are far away. Removing a brick or two in the middle of the structure can have far-reaching effects. The DMCA, even if it is intended to prevent speech in only one area of science, will have equally far-reaching effects.

  2. Scott Craver’s current research on forensic analysis of digital signals provides one good example of this effect. I understand that Scott describes this research project in his declaration, so I will not duplicate his description here.

  3. Scott’s research, if successful, will provide a capability that is potentially useful to almost any scientist or engineer who needs to analyze digital signals. Any scientist who employs a sensor to measure how the state of the world varies with time is capturing a digital signal, and Scott’s tool could help him or her to better understand that signal.

  4. To give just one example, geophysicists use seismographs to measure signals that capture vibrations of the earth. They use digital signal processing technology to analyze these signals in order to understand earthquakes. The U.S. government also analyzes seismographic signals to detect and characterize other countries’ underground testing of nuclear weapons.
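
As a purely illustrative sketch (the signal and code below are invented for this example, and are not Scott's work), the following shows the kind of routine digital signal analysis described above: finding the dominant frequency in a sampled vibration signal, using a textbook discrete Fourier transform.

```python
# Illustrative only: finding the dominant frequency in a sampled signal,
# the kind of routine step a seismologist might apply to vibration data.
import cmath, math

def dominant_frequency(samples, sample_rate):
    """Return the frequency (Hz) with the largest DFT magnitude
    (ignoring the zero-frequency term). O(n^2) textbook DFT."""
    n = len(samples)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):           # only up to the Nyquist frequency
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k * sample_rate / n

# Synthetic "ground vibration": a 5 Hz oscillation plus a weaker 12 Hz one,
# sampled at 100 Hz for one second.
rate, n = 100, 100
signal = [math.sin(2 * math.pi * 5 * t / rate) +
          0.3 * math.sin(2 * math.pi * 12 * t / rate) for t in range(n)]
print(dominant_frequency(signal, rate))  # → 5.0
```

The same few lines of mathematics apply whether the samples come from a seismograph, a medical sensor, or an audio recording, which is precisely why restrictions aimed at one application reach all of them.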

  5. Because of these connections between disciplines, Scott’s research has the potential to improve our understanding of earthquakes, and even to affect national security. Whether it will in fact lead to these benefits will be evident only after he has had a chance to publish his work --- assuming he is allowed to do so.

  6. The impact of the DMCA on science would be bad enough if it only affected the flow of ideas from computer security into other fields. But worse yet, because ideas discovered in other fields often have application in computer security, the chilling effect of the DMCA can reach far into other fields as well.

  7. Suppose, for example, that the signal processing methods that Scott Craver recently discovered had instead been discovered by a seismologist in the course of studying earthquakes, or analyzing foreign nuclear tests. In publishing his improved methods, the seismologist could run afoul of the DMCA. Although his purpose would not be to facilitate copyright infringement, a court might well rule that studying earthquakes, or analyzing nuclear tests, has “only limited commercially significant purpose or use” and so does not immunize him against a DMCA suit.

  8. Of course, our hypothetical seismologist would be unlikely to know whether somebody somewhere was trying to protect copyrighted works by adding spurious echoes to musical recordings. He could do a patent search, but that would not tell him whether any company was protecting such a system as a trade secret. He could study the scientific literature, where he might find the paper by Anderson and Petitcolas that discredited echo-based watermarking, but even companies that use discredited technology have brought DMCA suits (e.g., Universal v. Reimerdes). He would be left with no way to know whether he could safely publish his paper.

  9. Ultimately, the biggest chilling effect of the DMCA comes from the fact that a scientist has no practical way of knowing who might be able to sue him because of his work. Before I began the research that is at issue here, I had never even heard of Verance. Until I received the letter from Mr. Oppenheim, I did not know that RIAA might be able to sue me. I still am not certain of the identity of the four Doe defendants, all of whom apparently have standing to sue me.

  10. The DMCA says that a lawsuit can be brought not only by technology designers, and by copyright owners, but also by “any party injured.” Thus the circle of potential litigants apparently extends beyond the technology companies and copyright owners themselves, to include investors in those companies, customers of those companies who have contracted to purchase their products under the assumption that those products would become standards, and so on. There is simply no way I can determine who all of these people are, so that I can seek their permission or try to guess whether they are likely to sue me.

  11. After the events of the last three months, I suspect that I have the dubious distinction of knowing more about the DMCA than any other computer science researcher. But I still have no idea whose permission I need before I am allowed to publish my next paper.

  12. I have been contacted by an editor at Scientific American magazine, who is interested in having me write an article on “the experience of breaking the digital watermarks.” I am interested in writing such an article, and I am currently exploring the specifics with the editor. I envision the article as being similar to the USec paper, but tailored for the larger and broader audience of Scientific American, and therefore including more background and examples.

  13. However, at present I do not know whether RIAA, SDMI, or Verance will prevent me from writing the article or will prevent Scientific American from publishing it. I will not send the article to Scientific American unless there is some resolution by this court of my liability under the DMCA to the defendants, or unless the defendants agree to waive all claims respecting my publication and discussion of the article. I refuse to show the defendants a prepublication copy.

I declare under penalty of perjury that the foregoing is true and correct. Executed on August 12, 2001, in Palo Alto, California.


Edward W. Felten
