Slate e-mag has published a number of letters in response to Eugene Volokh's article, "Speech and Spillover," in Slate. That article can be found at this URL: . Volokh has responded to the responses in Slate's new letters column, posted today. Both the reader mail and Volokh's response to that mail can be found at this URL: . After that mail column is (in Slate-speak) "composted," it will be findable at this URL: .

Professor Hal Abelson and I composed a lengthy critique of the Volokh article -- too long, it turns out, for the editors of Slate, to whom we submitted it. But you can still read it either in this posting or at: . (It will also be available shortly at my homepage: .) For those who don't want to fire up a browser just now, here's the text of Hal's and my response:

**Response to Volokh article ("Speech and Spillover," posted Thursday, July 18)**

By Mike Godwin and Hal Abelson
30 July 1996

To the editors of Slate:

It is indisputable that the Communications Decency Act raises complex constitutional and factual questions about the government's prerogative under the First Amendment to protect children from certain kinds of speech. It is equally indisputable that Prof. Eugene Volokh should be commended for attempting to clarify those questions in his recent contribution to Slate ("Speech and Spillover," posted Thursday, July 18).

Sadly, however, Prof. Volokh's efforts may have created more confusion than they dispelled. Not only does Volokh blur the constitutional issues raised by legislation like the CDA, but he also misinforms Slate readers about the function and effectiveness of software content filters -- facts that are central to understanding the public debate about regulating content on the Net.

There are many problems with Prof. Volokh's First Amendment discussion, but most seem to follow from two basic errors.

First, Volokh fails to note that the Supreme Court has conditioned the scope of the government's authority to broadly regulate *constitutionally protected* content (such as nonobscene sexual content) on the specific character of the medium distributing that content. At the risk of oversimplifying, we may say that the Court has allowed the government greater authority to regulate "indecent" content either when it is broadcast (e.g., radio broadcasting in the Pacifica case) or when it is delivered in a manner indistinguishable in character -- to the audience, at least -- from broadcasting (e.g., cable television in this year's Denver Consortium case).

Second, Volokh conflates three distinct (if overlapping) categories of content: "indecent," "sexually explicit," and (by implication) "pornographic." In doing so, he reinforces a common confusion about the Communications Decency Act -- namely, that its reach was limited only to pornographic material. But as the judges in ACLU v. Reno noted, the terms of the CDA criminalized a far broader range of speech -- speech that is "indecent" or "patently offensive" -- much of which is not "sexually explicit" as those words are normally understood. (Not all speech that's indecent or patently offensive is about sex, Howard Stern notwithstanding.)

The judges also observed that the plaintiffs in ACLU v. Reno (ranging from Microsoft and Wired magazine to organizations such as Human Rights Watch and the National Writers Union) were easily distinguishable from the commercial pornographers whose dial-a-porn services were at issue in Sable Communications v. FCC (1989). That's why it's particularly troubling to see Prof.
Volokh cite Sable, a case about *regulating minors' access to commercial pornography*, in support of a more general claim that government has broad power to regulate nonpornographic "indecent" or "sexually explicit" content in the interest of protecting children. (Justice White, writing for the Court in Sable, does not go so far. Instead, he relies on two cases that, like Sable, involve pornography and minors. White never expressly states in Sable that the government has constitutional authority to regulate -- regardless of the medium -- the far broader category of speech called "indecency.")

These two basic legal errors give rise to other problems with Prof. Volokh's constitutional analysis. Most notably, he suggests compulsory labelling of online content without mentioning the First Amendment problem of "compelled speech" that clearly would arise, and without discussing whether such compulsory labelling would be constitutional if imposed on books and newspapers. (Medium-specific analysis suggests it wouldn't be, and the factual record in ACLU v. Reno seems to entail the same conclusion about compulsory labelling on the Internet.)

But perhaps the single most disturbing error in his article has to do with the facts, not the law. In order to support his thesis that technical solutions will never resolve what he sees as a perennial "spillover" problem, Volokh attempts to raise doubts about the effectiveness of selection/filtering software such as SurfWatch:

   The SurfWatch solution is limited by the software designers'
   ability to keep up with the latest "dirty" places. Dozens of Web
   sites are being added daily, and you never know what will get
   posted tomorrow even on existing sites or newsgroups. Some things
   will inevitably be missed. The purely technological fix, then, is
   less restrictive than the CDA, but it's also less effective. ...

What Prof. Volokh implies here (that filters rely solely or primarily on a list of "dirty places") is wholly false -- not just about SurfWatch, but about filtering software and filtering paradigms generally. We know of no product that operates as Volokh suggests SurfWatch does -- while many such programs do include specific lists of objectionable sites, *none of them relies on such a list as its primary means of filtering content*. This is true even though filtering paradigms may differ among products: SurfWatch uses multiple approaches, including keyword- and pattern-matching algorithms; the company uses its "blocked site" list as a supplement to its core filtering technologies. Netview's Specs For Kids program, in contrast, doesn't use a "blocking" strategy at all -- instead it reviews and rates sites (147,000 as of this writing), and admits minors only to those pre-approved sites.
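To make the contrast between these two paradigms concrete, here is a purely illustrative sketch in Python. It is not any vendor's actual code; the patterns and site names in it are hypothetical:

    # Illustrative only: two filtering paradigms, in miniature.
    import re
    from urllib.parse import urlparse

    # Paradigm 1 (SurfWatch-style, as described above): keyword- and
    # pattern-matching does the real work; the "blocked site" list is
    # only a supplement.
    OBJECTIONABLE = [re.compile(p, re.IGNORECASE)
                     for p in (r"\bsex\b", r"\bxxx\b")]
    BLOCKED_SITES = {"badsite.example.com"}      # the supplement

    def pattern_filter_allows(url: str) -> bool:
        host = urlparse(url).hostname or ""
        if host in BLOCKED_SITES:                # supplemental check
            return False
        return not any(p.search(url) for p in OBJECTIONABLE)

    # Paradigm 2 (Specs For Kids-style): no blocking list at all;
    # admit only sites that reviewers have already rated and approved.
    APPROVED_SITES = {"www.loc.gov", "www.nasa.gov"}   # hypothetical

    def allowlist_filter_allows(url: str) -> bool:
        return (urlparse(url).hostname or "") in APPROVED_SITES

    # A Web site created tomorrow needs no list update under paradigm
    # 1, and is simply not shown under paradigm 2 until reviewed.
    print(pattern_filter_allows("http://www.sex-pics.example.com/"))  # False
    print(allowlist_filter_allows("http://www.nasa.gov/photos"))      # True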
SurfWatch's continuing success during a period in which the total number of Web sites has boomed undercuts Volokh's generalization about the effect of that boom on these filters' effectiveness. This makes sense -- block the word "sex" in a Web address, and it doesn't matter if the number of Web addresses including the word "sex" has increased tenfold since last year. And it's difficult to see how the effectiveness of the Specs For Kids approach can be diminished by the boom, even in theory.

Volokh's analysis of filters, together with his mandatory-labelling suggestion, also shows a lack of awareness of the labelling infrastructure that software vendors and the rest of the network industry are increasingly accepting as a standard -- the Platform for Internet Content Selection (PICS).

PICS was developed by a cross-industry working group coordinated by the World Wide Web Consortium, and is described in the paper "PICS: Internet Access Controls Without Censorship," by Paul Resnick and James Miller. That paper (to appear in the Communications of the ACM), as well as other material on PICS, can be found on the World Wide Web at http://www.w3.org/pub/WWW/PICS.

PICS is a set of conventions that describe formats for labelling Internet content and methods for distributing those labels. PICS does not dictate what the labels should say or how they should be used. To quote Resnick and Miller, PICS is "analogous to specifying where on a package a label should appear, and in what font it should be printed, without specifying what it should say." The intent of this flexibility is to support a wide variety of labelling systems and selection methods. For instance, one might configure a Web browser to screen out material that carries certain labels. This is the system imagined by Prof. Volokh, but it is only one approach. As an alternative, one might make accessible only those Web pages that are labelled in particular ways -- for example, Web pages that carry the "seal of approval" of various organizations. This second approach, functionally similar to that of Specs For Kids, would address Prof. Volokh's concern about "keeping up with" new Web sites.
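A minimal sketch may help make these two uses of labels concrete. The label format below is a simplified, hypothetical stand-in rather than real PICS-1.1 syntax (real labels carry additional fields), and the service URL and category names are invented, but the two selection policies correspond to the two approaches just described:

    # Two ways selection software might act on content labels.
    # A label records which rating service issued it and the scores
    # that service assigned in each of its categories.
    page_label = {
        "service": "http://ratings.example.org/v1",    # hypothetical
        "ratings": {"violence": 1, "language": 0},
    }

    def screen_out(label, maxima):
        """Approach 1 (the browser setup Prof. Volokh imagines):
        block a page whose ratings exceed parent-set thresholds."""
        return any(label["ratings"].get(cat, 0) > limit
                   for cat, limit in maxima.items())

    def admit_only(label, trusted_services):
        """Approach 2 ("seal of approval"): show a page only if it
        carries a label from a rating service the parent trusts."""
        return label["service"] in trusted_services

    print(screen_out(page_label, {"violence": 0, "language": 0}))
    # -> True: the page is screened out under approach 1
    print(admit_only(page_label, {"http://ratings.example.org/v1"}))
    # -> True: the page is admitted under approach 2

Because PICS standardizes only the labels themselves, both policies -- and far more sophisticated ones -- can consume the same labels; the selection criteria remain in the hands of parents and software makers, not any central authority.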
And these are only the simplest applications; PICS was expressly designed to be an open-ended system that permits multiple labelling services and multiple ways of using labels. Unlike the CDA, PICS can be used for purposes other than screening out sexual or offensive content.

In creating a standard for interoperability, the PICS designers envisioned the growth of a competitive market in third-party rating services, where the pressures of competition will help assure that current and future labels are timely and accurate. They also envisioned a competitive market in selection software, leading to increasingly sophisticated techniques for using those labels. As Resnick and Miller write:

   Around the world, governments are considering restrictions on
   on-line content. Since children differ, contexts of use differ,
   and values differ, blanket restrictions on distribution can never
   meet everyone's needs. Selection software can meet diverse needs,
   by blocking reception, and labels are the raw materials for
   implementing context-specific selection criteria. The availability
   of large quantities of labels will also lead to new sorting,
   searching, filtering, and organizing tools that help users surf
   the Internet more efficiently.

The free-market evolutionary approach may not be perfect, but it is counterintuitive to assume, as Prof. Volokh apparently does, that saddling the system with CDA-derived regulations could make it more effective or efficient. If anything, such regulation would likely have the opposite effect. Imposing a single, federally approved standard for the kinds of constitutionally protected content that government can banish from public forums in the name of protecting minors seems likelier to skew the market. It would diminish the ability parents now have to decide for themselves which solution is most effective. (And the marketplace of ideas wouldn't exactly be enhanced, either.)

Ironically, those who rely on either Prof. Volokh's constitutional "spillover" analysis or his assessment of software filters may feel compelled to craft laws that ensure we never escape from the "spillover" problem: laws that needlessly set adults' First Amendment rights against the state's interest in protecting children. That would be a shame, since the technical solutions that Volokh dismisses carry the promise of avoiding his "spillover" problem altogether. Thanks to these inexpensive and highly adaptable tools, two important social interests -- the protection of children and the preservation of First Amendment rights -- need no longer be viewed as opponents in a zero-sum game.

Mike Godwin
Staff Counsel
Electronic Frontier Foundation

Hal Abelson
Professor of Computer Science and Engineering
Massachusetts Institute of Technology

---------
Mike Godwin
EFF
510-548-3290