Additions, clarifications, and corrections regarding the content of this document will be most graciously accepted: please send email to hhochheiser@cpsr.org.
Rating: Value judgments are used to categorize web sites based on their content. These ratings can use simple allowed/disallowed distinctions, like those found in programs such as CyberSitter or NetNanny, or they can have many values, as in ratings systems based on the Platform for Internet Content Selection (PICS, see question 3.0).
Filtering: With each request for information, the filtering software examines the resource that the user has requested. If the resource is on the "not allowed" list, or if it does not have the proper PICS rating, the filtering software tells the user that access has been denied and the browser does not display the contents of the web site.
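As a rough illustration, the core of this check can be sketched in a few lines of Python. This is a minimal sketch: the site names and list format are hypothetical, and real products keep their lists in proprietary, encrypted form.

    from urllib.parse import urlparse

    # Hypothetical "not allowed" list; real vendors ship encrypted lists.
    BLOCKED_SITES = {"badsite.example.com", "adult.example.net"}

    def allow_request(url):
        """Return True if the browser may display this resource."""
        host = urlparse(url).hostname or ""
        return host not in BLOCKED_SITES

    if not allow_request("http://badsite.example.com/page.html"):
        print("Access denied: this site has been blocked.")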
The first content filters were stand-alone systems consisting of mechanisms for determining which sites should be blocked, along with software to do the filtering, all provided by a single vendor.
The other type of content filter is protocol-based. These systems consist of software that uses established standards for communicating ratings information across the Internet. Unlike stand-alone systems, protocol-based systems do not contain any information regarding which sites (or types of sites) should be blocked. Protocol-based systems simply know how to find this information on the Internet, and how to interpret it.
Filters and ratings systems are seen as tools that would provide the cyberspace equivalent of the physical separations used to limit access to "adult" materials. By rating a site as objectionable and refusing to display it on the user's computer screen, filters and ratings systems can be used to prevent children from seeing material that their parents find objectionable. By preventing access, the software acts as an automated version of the convenience-store clerk who refuses to sell adult magazines to high-school students.
Filters are also used by businesses to prevent employees from accessing Internet resources that are either not work related or otherwise deemed inappropriate.
Whether used in homes or workplaces, these tools raise serious privacy concerns.
List-based blocking works by explicitly enumerating sites that should either be blocked or allowed. These lists are generally provided by filter vendors, who search for sites that meet criteria for being classified as either "objectionable" or "family-friendly".
Filtering software vendors vary greatly in the amount of information and control they make available to users. Most vendors do not allow users to see the actual list of blocked sites, as it is considered to be a kind of trade secret. However, some vendors provide detailed descriptions of the criteria used to determine which sites should be blocked. Some vendors might allow users to add sites to the list, either in their own software or by sending sites to the vendor for review.
Stand-alone filtering tools also vary in the extent to which they can be configured by users. Some software packages allow users to make selections from a list of the categories they would like blocked. For example, a parent may wish to block explicit sex but not discussions of homosexuality as a life-style. Others allow users to select from a range of options within a given topic area. For example, instead of simply blocking all nudity, these tools might allow users to choose to permit partial nudity while blocking full nudity.
Keyword-based blocking uses text searches to categorize sites. If a site contains objectionable words or phrases, it will be blocked.
First, these lists are incomplete. Due to the decentralized nature of the Internet, it's practically impossible to definitively search all Internet sites for "objectionable" material. Even with a paid staff searching for sites to block, software vendors cannot hope to identify all sites that meet their blocking criteria. Furthermore, since new web sites are constantly appearing, even regular updates from the software vendor will not block out all adult web sites. Each updated list will be obsolete as soon as it is released, as any site that appears after the update will not be on the list, and will not be blocked. The volatility of individual sites is yet another potential cause of trouble. Adult material might be added to (or removed from) a site soon after the site is added to (or removed from) a list of blocked sites.
Blocking lists also withhold information from users, who may or may not have access to descriptions of the criteria used to block web sites. While some vendors provide descriptions of their blocking criteria, this information is often vague or incomplete. Several vendors have extended blocking beyond merely "objectionable" materials: in some instances, political sites and sites that criticize blocking software have been blocked.
This obscurity is compounded by the practices used to protect these lists of blocked sites. Vendors often consider their lists to be proprietary intellectual property, and they protect them with encryption, which renders the lists incomprehensible to end users. As a result, users are unable to examine which sites are blocked, or why. This arbitrary behavior demeans the user's role as an active, thoughtful participant in their use of the Internet.
Keyword searches cannot use contextual information. While searches can identify the presence of certain words in a text, they cannot evaluate the context in which those words are used. For example, a search might find the word "breast" on a web page, but it cannot determine whether that word was used in a chicken recipe, an erotic story, or in some other manner. In one notable incident, America Online's keyword searches blocked a breast cancer support group.
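A few lines of Python make the problem concrete. In this illustrative sketch (the word list is hypothetical), a naive keyword match flags both a recipe and a medical support page, because it sees only the word and never its context:

    # Naive keyword blocking: flag a page if it contains a blocked word.
    BLOCKED_WORDS = {"breast", "xxx"}

    def is_blocked(page_text):
        words = (w.strip(".,!?") for w in page_text.lower().split())
        return any(w in BLOCKED_WORDS for w in words)

    print(is_blocked("Rub the chicken breast with butter."))    # True: a recipe is blocked
    print(is_blocked("Breast cancer support group meeting."))   # True: a support group is blocked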
Keyword searches cannot interpret graphics. It is not currently possible to "search" the contents of a picture. Therefore, a page containing sexually explicit pictures will be blocked only if the text on that page contains one or more words from the list of words to be blocked.
The Massachusetts Institute of Technology's World Wide Web Consortium has developed a set of technical standards called PICS (Platform for Internet Content Selection) so that people can electronically distribute descriptions of digital works in a simple, computer-readable form. Computers can process these labels in the background, automatically shielding users from undesirable material or directing their attention to sites of particular interest. The original impetus for PICS was to allow parents and teachers to screen materials they felt were inappropriate for children using the Net. Rather than censoring what is distributed, as the Communications Decency Act and other legislative initiatives have tried to do, PICS enables users to control what they receive. There are two components involved in the practical use of PICS: ratings systems, and software that uses ratings systems to filter content.
PICS-based software takes an alternative approach based on distributed sharing of ratings information. Instead of using blocking lists or keyword searches, programs that use PICS rely on standardized "ratings systems" to determine which sites should be blocked. Available from software vendors or from Internet sites, these ratings systems are used to describe the content of Internet sites (see question 3.7 for a description of how PICS works in practice). Users of PICS-based software are usually given the ability to choose which ratings system they would like to use.
As an open standard, PICS can be used for a wide range of applications. In addition to providing a means for blocking content deemed unsuitable for children, PICS might also be used to describe sites in terms of their educational value, potential for violations of privacy, or any other criteria that involve rating of Internet sites.
In some respects, programs that use PICS are much more flexible than stand-alone filtering software. Users of PICS software are not tied to the judgments of the software vendor, and the descriptions of the criteria used by the ratings systems are publicly available. However, users are currently limited to choosing between a small number of ratings systems, each of which has its own biases and viewpoints. Users who disagree with the popular ratings systems may be unable to use PICS in a manner that fits their needs and viewpoints.
A rating is a description of some particular Internet content, using the terms and vocabulary of some ratings system.
Self-Rating: Web site publishers can evaluate their own content and put PICS rating information directly into their web pages. Currently, this evaluation can be done through Web pages provided by developers of the major ratings services.
Third-Party Ratings: Interested third parties can use PICS ratings systems to evaluate web sites and publish their own ratings for these sites. Educational groups, religious groups, or individuals can rate sites and publish these ratings on the Internet for users to access.
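For illustration, a self-rating might look something like the following: the publisher adds a PICS label to each page's HTML, here built as a Python string. The label syntax and the RSACi-style category letters shown are approximate, and the URLs are placeholders; consult the ratings services for the exact format.

    # Approximate shape of a self-applied PICS label embedded in a page's
    # HTML <HEAD>. The category letters (n, s, v, l) and levels are
    # illustrative RSACi-style values; the page URL is a placeholder.
    pics_meta_tag = (
        "<META http-equiv=\"PICS-Label\" content='"
        '(PICS-1.1 "http://www.rsac.org/ratingsv01.html" l '
        'for "http://www.example.com/index.html" '
        "r (n 0 s 0 v 0 l 0))'>"
    )
    print(pics_meta_tag)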
Your browser software may influence your choice of ratings service. If you use Microsoft's Internet Explorer, only one choice (RSACi) is built into the initial distribution. To use other ratings services, IE users must download files from the 'Net and install them on their PCs.
Currently (as of September 1997), there are three PICS services that are being widely used or promoted:
RSACi: Sponsored by the Recreational Software Advisory Council (known for its ratings on video games), RSACi is probably the most widely used PICS ratings system today. RSACi's ratings categories include violence, nudity, sex, and language, with 5 levels within each category. As of September 1997, RSACi claims to have over 43,000 sites rated.
SafeSurf: Developed by the SafeSurf corporation, this system's categories include "Age Range," "Profanity," "Heterosexual Themes," "Homosexual Themes," "Nudity," "Violence," "Sex, Violence, and Profanity," "Intolerance," "Glorifying Drug Use," "Other Adult Themes," and "Gambling," with 9 distinctions for each category.
SafeSurf and RSACi both rely on self-rating of Internet sites by web publishers.
NetShepherd: Based in Calgary, Net Shepherd rates sites based on quality levels (1-5 stars). Unlike SafeSurf and RSACi, Net Shepherd conducts third-party ratings of web sites. The company claims to have rated over 300,000 sites. Net Shepherd has also announced partnerships with firms such as Altavista and Catholic Telecom, Inc.
Once these choices have been made, the browser software uses them to filter sites. When an Internet site is requested, the browser compares the site's rating with the user's selection. If the site has ratings for the chosen system and those ratings fit within the parameters chosen by the user, it is displayed as usual. If the appropriate ratings fall outside of those parameters (perhaps the site has "frontal nudity," while the user was only willing to accept "partial nudity"), access to the site is prohibited, and the user is shown a message indicating that the site is blocked.
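The comparison itself is simple. The sketch below uses hypothetical category names and numeric levels (higher numbers mean more explicit content); actual PICS software performs an equivalent check against the chosen ratings system:

    # The user's chosen limits: e.g., "partial nudity" = level 1.
    USER_LIMITS = {"nudity": 1, "sex": 0, "violence": 2, "language": 2}

    def site_allowed(site_ratings):
        """Allow a site only if every rating is within the user's limits."""
        return all(level <= USER_LIMITS.get(category, 0)
                   for category, level in site_ratings.items())

    # A site with "frontal nudity" (say, level 3) exceeds the limit of 1:
    print(site_allowed({"nudity": 3, "sex": 0}))   # False -> blocked
    print(site_allowed({"nudity": 1, "sex": 0}))   # True  -> displayed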
Since most web sites are not currently rated, most software provides users with the option of blocking out sites that do not contain PICS ratings.
In order to prevent mischievous children from changing ratings or disabling PICS altogether, most browsers can be configured to require a password before disabling PICS.
RSACi, SafeSurf, and other proponents of ratings would obviously like everyone to rate their sites, while civil libertarians and opponents of ratings argue against any ratings.
Publishers of family-oriented sites or those who are trying to reach audiences concerned with Internet content might consider rating. Similarly, purveyors of adult material might rate their sites in order to be "good citizens".
In evaluating ratings systems, publishers may want to examine the categories used by each system and the distinctions within those categories. Different systems will classify sites in different ways, some of which may misrepresent the content of web sites. For example, sites discussing safe sex might not want to be placed in the same category as pornographic sites.
Web site publishers might also consider the popularity of the ratings services. Currently (as of September 1997), there are only a few major ratings services. Publishers are free to use other ratings systems, but these may not be useful to the Internet users who rely upon the popular systems. This presents a dilemma for some publishers, who can either accept the ratings of the popular systems, even if those ratings misrepresent their material, or refuse to rate their sites, knowing that this might cause their sites to be unavailable to some users.
Versions of Microsoft's Internet Explorer have provided an extreme example of this problem. Although IE allows users to use any PICS ratings system, RSACi is the only system built into the selection list. Since Internet Explorer is the most widely used PICS-capable browser (as of fall 1997, Netscape's Navigator does not support PICS), it seems likely that many PICS users will be relying upon RSACi. For publishers interested in reaching a wide audience, this market force may determine their choice of ratings system.
Finally, philosophical concerns may cause some people to decide not to rate. Web-site publishers who are not comfortable with the general content of available ratings systems, or who object to the concept of ratings, may choose not to rate their own sites.
MSNBC's troubles with ratings provide an ironic illustration of this possibility. Displeased with the RSACi ratings that would be necessary, MSNBC management removed all rating information from the site. MSNBC and other news organizations briefly discussed the possibility of creating a new ratings system specifically for news reporting.
While this proposal was eventually rejected, it illustrates some of the problems with content ratings. Well-funded publishers like MSNBC might be able to effectively create ratings systems that meet their needs, but smaller publishers who want to rate their sites may be forced to accept unsatisfactory ratings.
To make matters worse, third-party rating does not require the consent or even notification of a web-site publisher. Since third-party ratings are distributed by third-party "label bureaus," a web-site publisher may not know if her pages have been rated, or what the ratings said.
Third-party ratings also present significant technical challenges that may discourage their development. Unlike self-ratings, third-party PICS ratings do not reside on publishers' web pages. Instead, they must be distributed to users separately, for example through label bureaus that filtering software queries as pages are requested.
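In rough outline, a PICS-aware client would ask a bureau for the labels covering a page before displaying it. The sketch below assumes a hypothetical bureau address and query format; the actual PICS label-bureau protocol differs in its details:

    import urllib.parse
    import urllib.request

    BUREAU = "http://labels.example.org/lookup"   # hypothetical label bureau

    def fetch_third_party_label(page_url):
        """Ask the label bureau for any ratings it holds for page_url."""
        query = urllib.parse.urlencode({"url": page_url})
        with urllib.request.urlopen(BUREAU + "?" + query) as response:
            return response.read().decode()       # PICS label text, if any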
Some software, such as Microsoft's Internet Explorer, provides users with the option of blocking any site that does not have a rating. This choice may be appropriate for some, but it is severely limiting: since most web sites are unrated, this option blocks out most of the Web (possibly including some sites designed for younger users), presenting children with a severely restricted view of the world.
The accuracy of PICS ratings is obviously a concern. For example, unscrupulous purveyors of adult material might use an inaccurate rating in an attempt to slip through PICS filters. In RSACi's terms of use, the RSAC reserves the right to audit sites in order to guarantee the accuracy of ratings. SafeSurf takes this one step further: its proposed Online Cooperative Publishing Act calls for legal penalties for sites that label inaccurately, or refuse to rate. In June 1997, Sen. Patty Murray (D-Washington) proposed the Child-safe Internet Act of 1997, which called for similar penalties. While these legislative suggestions might be effective in promoting the use of ratings, they raise serious concerns in terms of First Amendment rights and possibilities for overly aggressive enforcement. Question 4.1 discusses these possibilities in more depth. There are currently no quality controls on third-party ratings.
These issues of quality and accountability would become even trickier if numerous schemes were to come into use. If there were dozens of PICS ratings schemes to choose from, publishers would not know which to choose, and users might not know which to trust.
The first - and currently the only viable - alternative is to avoid the use of PICS altogether, both for self-rating and in Internet browsers.
The second approach would be to develop a new ratings vocabulary, as an alternative to RSACi, SafeSurf, or other currently available ratings systems. This involves several steps:
The first step is the generation of a ratings system, including the categories that would be used and the distinctions within those categories. This would require a discussion of the values that will be represented in the ratings system, and how these values should be expressed.
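For instance, a new ratings vocabulary might be no more than a set of named categories, each with an ordered list of distinctions. The following is a hypothetical example, echoing the educational-content idea mentioned earlier:

    # A hypothetical ratings vocabulary: categories, each with ordered
    # distinctions from least to most present.
    NEW_RATING_SYSTEM = {
        "violence":  ["none", "sports", "fighting", "graphic"],
        "education": ["none", "incidental", "substantial", "primary focus"],
    }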
Once the system has been developed, sites must be rated. This can be done in one of two ways: publishers can rate their own sites, as with RSACi and SafeSurf, or third parties can rate sites, as Net Shepherd does.
Given the significant resources that will be needed to effectively deploy a new ratings system, it seems unlikely that there will be a large number of PICS alternatives available in the near future. The developers of PICS are trying to change this through the PICS Incubator project, which offers resources to organizations interested in developing new ratings systems.
Book reviews and movie ratings are only two examples of the many ways in which we use information filters. Used in conjunction with other information sources - including advertising and word-of-mouth - these ratings provide a basis for making informed decisions regarding information.
Unfortunately, PICS does not currently provide users with the contextual information and range of choices necessary for informed decision making. When deciding which movies to see, we have access to reviews, advertisements and trailers which provide information regarding the content. These details help us choose intelligently based on our values and preferences. On the other hand, PICS-based systems do not provide any contextual detail: users are simply told that access to a site is denied because the site's rating exceeds a certain value on the rating scale.
Furthermore, the limited range of currently available PICS ratings systems does not provide users with a meaningful choice between alternatives. Parents who are not comfortable with any of the current ratings systems may not find PICS to be a viable alternative.
Continuing with our analogies to other media, consider book reviews in a world where only two or three publications reviewed books. This might work very well for people who agree with the opinions of these reviewers (and, of course, for the reviewers themselves!), but it would work very poorly for those who have differing viewpoints.
Some might argue that the "success" of a single set of movie ratings offers a model for PICS. However, movie ratings are generally applied only to films made for entertainment by major producers. Documentaries and educational films are generally not rated, but similar web sites could be rated under PICS.
Movie ratings also provide a cautionary lesson that should be considered with respect to the Internet. Unrated movies, or movies with certain ratings, often have a difficult time reaching audiences, as they may not be shown in certain theaters or carried by large video chains. This has led to self-censorship, as directors trim explicit scenes in order to avoid NC-17 ratings. This may be appropriate for commercially-oriented entertainment, but it could be dangerous when applied to safe-sex information on the Internet.
Ratings systems also fail to account for the global nature of the Internet. Legal or practical pressures aimed at convincing Internet publishers to rate their own sites will have little effect, as these businesses or individuals have the option of simply moving their material to a foreign country. Furthermore, the existing ratings systems are of limited value to those in countries that do not share western values.
Concerns about unrated international material or differing cultural values could be addressed through direct censorship. For example, governments might use PICS ratings or proprietary filtering software to implement "national firewalls" which would screen out objectionable material. Alternatively, ratings might be used to "punish" inappropriate speech. If search engines chose to block sites with certain ratings (or unrated sites), or if browsers blocked certain ratings (or lack of ratings) by default, these sites might never be seen.
It is possible that a wide range of PICS ratings systems could come into use, providing families with a real opportunity to choose ratings that meet their values. The utility of PICS might also be increased by the use of new technologies like "metadata" (data about data, used to describe the content of web pages and other information resources), which might be used to provide contextual information along with PICS ratings. However, these tools may not be available for general use for some time, if at all.
Some people confuse ratings with the topical organization that is used in libraries and Web sites like Yahoo. While no system of organization of information is neutral, topical schemes attempt to describe what a resource is "about". Rating rarely helps us find information resources topically and is usually too narrowly focused on a few criteria to be useful for information retrieval.
If this question is taken to mean: "Are there any solutions that would provide children with the ability to use the Internet without ever seeing material that is explicit or 'adult'?", the answer is probably yes. This would require a combination of three factors:
If the question is interpreted as meaning: "Are there any solutions that provide some protection from adult or objectionable material without restricting free speech?" the answer is much less clear. Stand-alone systems clearly don't meet these criteria, as they place users at the whims of software vendors, who may block sites for arbitrary reasons. In theory, PICS might fit this role, but the lack of a meaningful choice between substantially different ratings systems leaves parents and publishers with the choice of using ratings that they may not agree with, or that fail to adequately describe their needs or materials.
Describing speech as "adult" or "appropriate for children" is an inherently tricky and value-laden process. In the U.S., many people have attempted to prevent schools and libraries from carrying publications ranging from Huckleberry Finn to descriptions of gay/lesbian lifestyles. The fierce debates over these efforts show that no consensus has been reached. Increased use of filtering software would likely be the beginning, rather than the end, of debates regarding which Internet materials are "appropriate" for children, and who gets to make that decision.
Secondly, parents should take an active role and interest in their children's use of the Internet. For some children this might mean restricting Internet use to closely supervised sessions. Other children might be able to work with clearly defined rules and guidelines. To discourage unsupervised use of the Internet, parents might consider measures such as placing the family computer in a common space in the home and retaining adult control over any passwords required for Internet access.
Parents should also work to educate children regarding proper use of the Internet. Just as parents teach children not to talk to strangers on the street, parents might discourage children from visiting certain web sites, divulging personal or family information, or participating in inappropriate chats.
Some parents might consider using filtering software, despite all of the potential drawbacks. Parents considering this route should closely examine the available products, in order to understand their options and the implications of any choice.
For stand-alone filtering systems, this means investigating the criteria used in developing blocking lists and/or news reports describing the software. If possible, parents might try to find stand-alone systems that allow users to view and edit the lists of blocked sites.
Parents considering the use of PICS systems should investigate the categories used by the various ratings systems, in order to find one that meets their needs. Information about PICS-based systems can be found at the home pages of the respective ratings systems.
In general, the use of a filtering product involves an implicit acceptance of the criteria used to generate the ratings involved. Before making this decision, parents should take care to ensure that the values behind the ratings are compatible with their beliefs.
Finally, parents should realize that the Internet is just a reflection of society in general. Much of the "adult" content on the Internet can be found on cable TV, at local video stores, or in movie theaters. Since other media fail to shield children from violence or sexual content, restrictions on the Internet will always be incomplete.
ISP-Based Filtering: ISPs might do the filtering themselves, preventing their customers from accessing objectionable materials, even if those customers do not have their own filtering software. This requires the use of a proxy server, which would serve as a broker between the ISP's customers and remote web sites. When a customer of a filtering ISP wants to see a web site, his request goes to the proxy server operated by the ISP. The proxy server will then check to see if the site should be blocked. If the site is allowable, the proxy server retrieves the web page and returns it to the customer.
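The decision logic at the proxy is straightforward to sketch. This is illustrative only: the host list is hypothetical, and a real proxy must also speak HTTP to both the customer and the origin server.

    from urllib.parse import urlparse

    BLOCKED_HOSTS = {"badsite.example.com"}   # hypothetical ISP blocklist

    def proxy_response(requested_url):
        host = urlparse(requested_url).hostname
        if host in BLOCKED_HOSTS:
            return "HTTP/1.0 403 Forbidden\r\n\r\nBlocked by the ISP's filter."
        # Otherwise the proxy fetches the page from the origin server on
        # the customer's behalf and relays it back.
        return "HTTP/1.0 200 OK\r\n\r\n(page contents fetched by the proxy)"

    print(proxy_response("http://badsite.example.com/"))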
This approach is technically feasible. In fact, it's currently used by many corporations and by some ISPs that offer filtering as a service. However, proxying requires significant computational resources that may be beyond the means of smaller ISPs. Even if an ISP can afford the computers and Internet bandwidth needed, this approach is still far from ideal. In order to do the filtering, proxy servers would have to use stand-alone or PICS-based systems, so they would be subject to the limitations of these technologies (see 2.4, 2.5, and 3.13). The shortcomings of existing filtering systems may prove particularly troublesome for ISPs that advertise filtering services, as these firms could be embarrassed or worse if their filters fail to block adult material. Finally, ISPs that filter material may lose customers who are interested in unfiltered access to the Internet.
Providing Filtering Software: Others have suggested that ISPs should be required to provide users with filtering software. While this might be welcomed by parents who are thinking about getting on to the 'Net (and by software vendors!), it could present a serious financial burden for smaller ISPs.
Most advocates of the use of blocking software by libraries have forgotten that the public library is a branch of government, and therefore subject to First Amendment rules which prohibit content-based censorship of speech. These rules apply to the acquisition or the removal of Internet content by a library. Secondly, government rules classifying speech by the acceptability of content (in libraries or elsewhere) are inherently suspect, may not be vague or overbroad, and must conform to existing legal parameters laid out by the Supreme Court. Third, a library may not delegate to a private organization, such as the publisher of blocking software, the discretion to determine what library users may see. Fourth, forcing patrons to ask a librarian to turn off blocking software has a chilling effect under the First Amendment.
Fahrenheit 451.2: Is Cyberspace Burning? - The ACLU's Report on Filtering Software
Peacefire (anti-ratings group)
The Censorware Project
The Global Internet Liberty Campaign has an excellent page on ratings and filters.
The Internet Free Expression Alliance is a coalition of groups working to preserve open expression on the Internet.
Computer Professionals for Social Responsibility (CPSR)