Free Speech and Content Control in Cyberspace

Introduction

The Internet has clearly expanded the potential for individuals to exercise their First Amendment right to freedom of expression. The 'net gives all of its users a vast expressive power if they choose to take advantage of it. For example, users can operate their own bulletin boards, publish electronic newsletters, or establish a home page on the Web. According to Michael Godwin, the 'net "puts the full power of 'freedom of the press' into each individual's hands." [1] Or as the Supreme Court eloquently wrote in its Reno v. ACLU decision, the Internet enables an ordinary citizen to become "a pamphleteer, . . . a town crier with a voice that resonates farther than it could from any soapbox." [2] As a result, the issue of free speech and content control in cyberspace has emerged as arguably the most contentious moral problem of the nascent Information Age.

Human rights such as free speech have taken a place of special prominence in this century. In some respects, these basic rights now collide with the state's inclination to rein in this revolutionary power enjoyed by Internet users. Although the United States has sought to suppress online pornography, the target of some European countries, such as France and Germany, has been hate speech.

In addition, speech is at the root of most other major ethical and public policy problems in cyberspace, including privacy, intellectual property, and security. These three issues are discussed in future chapters, where the free speech theme continues to have considerable saliency. Restrictions on the free flow of information to protect privacy (such as the mandatory opt-in requirement in Europe) clearly amount to a restraint on the communication of information; this effort to protect privacy is therefore a notable free speech issue. Intellectual property rights are also tantamount to restrictions on free speech: if someone has property rights to a trademark, others cannot use that form of expression freely. Finally, one way in which users seek to secure their data is encryption, but encryption in the wrong hands could be a threat to national security; therefore, many argue that encryption needs to be subject to government control. But shouldn't the right to free speech include the right to protect it from cybersnoopers by means of encryption? Thus, many of the most intractable difficulties in cyberspace can be reduced to the following question: what is the appropriate scope of free expression for organizations and individuals?

Those who pioneered Internet technology have consistently asserted that the right to free expression in cyberspace should have as broad a scope as possible. For many years, the government was reluctant to restrict or filter any form of information on the network for fear of stifling an atmosphere that thrives on the free and open exchange of ideas. However, the increased use of the Internet, especially among more vulnerable segments of the population (such as young children), forced some public policy makers to rethink this laissez-faire approach. In the United States, the result has been several futile attempts to control Internet content through poorly crafted legislation. An unfortunate byproduct has been publicity and attention to this matter that is probably out of proportion to the depth or gravity of the problem. Despite the calls for regulation, there is a powerful sentiment among many Internet stakeholders to maintain the status quo.
The strongest voices continue to come from those who want to preserve the Internet's libertarian spirit and who insist that the surest way to endanger the vitality of this global network is onerous regulations and rules, which would stifle the creative impulses of its users and imperil this one last bastion of free, uninhibited expression. In this chapter, we focus on problematic forms of free expression, such as pornography, hate speech, and even the nuisance speech known as spam (unsolicited commercial email). In the context of this discussion, we consider whether the libertarian ethic favoring broad free speech rights still has validity despite the growing complexity and the diverse user community now found in cyberspace.

Pornography in Cyberspace

Before we discuss the U.S. Congress' recent efforts to regulate speech on the 'net, we should be clear about what constitutes pornographic speech. There are two broad classes of such speech: (1) obscene speech, which is completely unprotected by the First Amendment, and (2) "indecent" speech, which is not obscene for adults but should be kept out of the hands of children under the age of seventeen.

In Miller v. California, the Supreme Court established a three-part test to determine whether or not speech fell into the first category and was obscene for everyone. To meet this test, speech had to satisfy the following conditions: (1) it depicts sexual (or excretory) acts explicitly prohibited by state law; (2) it appeals to prurient interests as judged by a reasonable person using community standards; and (3) it has no serious literary, artistic, social, political, or scientific value. Child pornography is an unambiguous example of obscene speech.

The second class of speech, often called indecent speech, is obscene for children but not for adults. The relevant legal case is Ginsberg v. New York, which upheld New York's law banning the sale of speech "harmful to minors" to anyone under the age of seventeen. The law in dispute in the Ginsberg case defined harmful to minors as follows: "that quality of any description or representation, in whatever form, of nudity, sexual conduct, sexual excitement, or sado-masochistic abuse, when it: (1) predominantly appeals to the prurient, shameful, or morbid interests of minors, and (2) is patently offensive to prevailing standards in the adult community as a whole with respect to what is suitable for minors, and (3) is utterly without redeeming social importance for minors." [3] Although state legislatures have applied this case differently to their statutes prohibiting the sale of obscene material to minors, these criteria can serve as a general guide to what we classify as "Ginsberg" speech, which should be off limits to children under the age of seventeen.

Public Policy Overview

The Communications Decency Act (or CDA I)

The ubiquity of both forms of pornography on the Internet is a challenge for lawmakers. As the quantity of communications grows in the realm of cyberspace, there is a much greater likelihood that people will become exposed to forms of speech or images that are offensive and potentially harmful. If you are seeking to send an email to the President of the United States and accidentally retrieve the Web site www.whitehouse.com instead of www.whitehouse.gov, you will see what we mean.
By some estimates, the Internet currently has about 280,000 sites that cater to various forms of pornography, and some sources report that an average of 500 additional sites come online every day; hence the understandable temptation of governments to regulate and control free expression on the Internet to contain the negative effects of unfettered free speech on this medium. The Communications Decency Act (CDA), recently ruled unconstitutional by the U.S. Supreme Court, represented one such futile, and some say misguided, attempt at such regulation.

One impetus behind the CDA was a flawed 1995 Carnegie Mellon study published in the Georgetown Law Journal, which surveyed 917,410 computer images and found that 83.5% of all computerized photographs available on the Internet were pornographic. The Carnegie Mellon researchers also confirmed that online pornography was not only ubiquitous but also quite profitable for its many purveyors. In addition, those images were not just of naked women but involved pedophilia and paraphilia (images of bondage and sadomasochism). The results of this alarming study were reported in a famous Time magazine cover story titled "Cyberporn." According to the Time article, "The appearance of material like this on a public network accessible to men, women, and children around the world raises issues too important to ignore--or to oversimplify." [4]

The sensational Time article greatly heightened interest in the CDA. The bill's sponsor, Senator Exon, cited the Carnegie study as proof that passage of this legislation was essential. There was indisputable evidence, however, that parts of the study were spurious. Marty Rimm, a Carnegie Mellon undergraduate, was the study's lead researcher and author. The bulk of Rimm's data came from 68 bulletin board systems (BBSs), some of which were adult BBSs, and yet Rimm certainly gave the impression that his study was based on and applied to the whole "information superhighway." According to Michael Godwin, "to generalize from commercial porn BBSs to the 'information superhighway' would be like generalizing from Times Square adult bookstores to the print medium." [5]

Nonetheless, thanks in part to the publicity generated by the study's findings and the Time cover story, the CDA was passed by Congress and signed by President Clinton in 1996. Congress was especially worried about the direct negative effects of easily accessible pornographic material on children. It recognized that this medium erected few obstacles between gross and explicit material and curious children navigating their way through cyberspace. Congress also referred to a secondary effect: the ready availability of pornographic material might make parents less inclined to allow Internet use in their households, which would diminish the Internet's utility.

The CDA included several key provisions that restricted the distribution of sexually explicit material to children. It imposed criminal penalties on anyone who "initiates the transmission of any communication which is . . . indecent, knowing that the recipient of the communication is under 18 years of age." It also criminalized the display of patently offensive sexual material "in a manner available to a person under 18 years of age." [6] Defenders of the CDA contended that this was an appropriate way of channeling pornographic or "Ginsberg" speech on the Internet away from children. It did not seek to ban adults from viewing such speech.
Rather, it was an attempt to zone the Internet just as we zone physical environments. According to one supportive brief: "The CDA is simply a zoning ordinance for the Internet, drawn with sensitivity to the constitutional parameters the Court has refined for such regulation. The Act grants categorical defenses to those who reasonably safeguard indecent material from innocent children--who have no constitutional right to see it--channeling such material to zones of the Internet to which adults are welcome but to which minors do not have ready access." [7]

Support for the CDA was thin, however, and it was quickly overwhelmed by strident and concerted opposition. An alliance of Internet users, Internet Service Providers (ISPs), and civil libertarian groups challenged the legislation as a blatant violation of the First Amendment right of free speech. This coalition was spearheaded by the American Civil Liberties Union (ACLU), and the case became known as ACLU v. Reno. The plaintiffs argued that because of the way the Internet worked, this law would most likely have the effect of also banning the transmission of "indecent" material to adults. They also contended that the banned speech might cast the net of censorship too far by including works of art and literature and maybe even health-related or sex education information. Also, even if the CDA were enacted, it would have minimal impact on the availability of pornography in cyberspace. It could not control sexual content on the Internet originating in other countries, nor could it halt pornography placed on the Internet by anonymous remailers, which are usually located offshore and beyond the pale of U.S. regulators. The bottom line is that because the Internet is a global network, localized content restrictions enacted by a single national government to protect children from indecent material will probably be ineffectual.

A panel of federal judges in Philadelphia ruled unanimously that the CDA was a violation of the First and Fifth Amendments. The Justice Department appealed the case, which now became known as Reno v. ACLU, but to no avail. The Supreme Court agreed with the lower court's ruling, and in June 1997, it declared that this federal law was unconstitutional. The Court was especially concerned about the vagueness of this content-based regulation of speech. According to the majority opinion written by Justice Stevens, "We are persuaded that the CDA lacks the precision that the First Amendment requires when a statute regulates the content of speech. In order to deny minors access to potentially harmful speech, the CDA effectively suppresses a large amount of speech that adults have a constitutional right to receive and to address to one another." [8]

Stevens also held that free expression on the Internet is entitled to the highest level of First Amendment protection. This is in contrast to the more limited protections for other more pervasive media such as radio and broadcast and cable television, where the Court has allowed many government-imposed restrictions. In making this important distinction, the Court assumed that computer users have to actively seek offensive material, whereas they are more likely to encounter it accidentally on television or radio.

CDA II

Most of those involved in the defeat of the CDA realized that the issue would not soon go away. Congress, still supported by public opinion, was sure to try again.
In October 1998, Congress did try again, passing an omnibus budget package that included the Child Online Protection Act (COPA), a successor to the original CDA that has become known as CDA II. The law was signed by President Clinton, and like its predecessor, it was immediately challenged by the ACLU. CDA II would make it illegal for the operators of commercial Web sites to make sexually explicit materials harmful to minors available to those younger than seventeen years of age. Commercial Web site operators would be required to collect an identification code, such as a credit card number, as proof of age before allowing viewers access to such material.

The ACLU and other opponents claimed that the law would lead to excessive self-censorship. CDA II would have a negative impact on the ability of these commercial Web sites to reach an adult audience. According to Max Hailperin, "There is no question that the COPA impairs commercial speakers' ability to cheaply, easily, and broadly communicate material to adults that is constitutionally protected as to the adults (nonobscene), though harmful to minors." [9]

The law is more narrowly focused than CDA I because it attempts to define objectionable sexual content more carefully: such content would lack "serious literary, artistic, political or scientific value" for those younger than seventeen years of age. However, the law's critics contend that it is still worded too broadly. Those critics also worry about what will happen if the law is arbitrarily or carelessly applied. For example, would some sites offering sexual education information violate the law? In February 1999, a Philadelphia federal judge issued a preliminary injunction against CDA II, preventing it from going into effect. The judge accepted the argument that the law would lead to self-censorship and that "such a chilling effect could result in the censoring of constitutionally protected speech, which constitutes an irreparable harm to the plaintiffs." [10] An appeal is considered likely, meaning that the ultimate resolution will have to await the Supreme Court's decision.

At the heart of the debate about the CDA and content regulation is the basic question, raised in Chapter Two, about how the Internet should be controlled. Should government impose the kind of central controls embodied in this legislation? Or should the Internet be managed and controlled through a more bottom-up, user-oriented approach, with users empowered to develop their own solutions tailored to their own needs and value systems? One advantage of the latter approach is that such controls are more consistent with the Internet's decentralized network architecture. For many users, decentralism in the area of content control seems preferable to formal state regulations. It respects civil liberties and leaves the opportunity for content control in the hands of those most capable of exercising it.

However, reliance on a decentralized solution is certainly not without opposition and controversy. If we empower users to control Internet content in some way, we are still left with many questions. If we assert that the purpose of censoring the Internet is the preservation of the community's values, how do we define community? Also, how do we ascertain what the community's values really are? Finally, can we trust technology to help solve the problem, or will it make matters even worse?
Automating Content Controls

Thanks to the rulings against CDA I and II, the burden of content control is now shifting to parents and local organizations, and this communal power has raised some concerns. To what extent should local communities and institutions (such as schools, prisons, libraries, and so on) assume direct responsibility for controlling content on the Internet? Libraries, for example, must consider whether it is appropriate to use filtering software to protect young patrons from pornography on the Internet. Is this a useful and prudent way to uphold local community or institutional standards? Or does this sort of censorship compromise a library's traditional commitment to the free flow of ideas?

There are two broad areas of concern about the use of content controls that need elaboration. The first concerns the ethical probity of censorship itself, even when it is directed at the young. There is a growing tendency to recognize a broad spectrum of rights, even for children, and to criticize parents, educators, and politicians who are more interested in imposing their value systems on others than in protecting vulnerable children. Jonathan Katz and other advocates of children's rights oppose censorship, even within a private household, unless it is part of a mutually agreed upon social contract between the parent and child. According to Katz, "Parents who thoughtlessly ban access to online culture or lyrics they don't like or understand, or who exaggerate and distort the dangers of violent and pornographic imagery, are acting out of arrogance, imposing brute authority." [11] Rather, Katz contends, young people have a right to the culture that they are creating and shaping. The ACLU seems to concur with this position, and it too advocates against censorship as a violation of children's rights.

Lurking in the background of this debate is the question of whether children have a First Amendment right to access indecent materials. Legal scholars have not reached a consensus about this, but if children do have such a right, it would be much more difficult to justify filtering out indecent materials in libraries or educational institutions. One school of thought is that a child's free speech rights should be proportionate to his or her age: the older the child, the more problematic are restrictions on indecent material.

The second area of concern pertains to the suitability of the blocking methods and other automated controls used to accomplish this censorship. Two basic problems arise with the use of blocking software. The first is the unreliability and lack of precision that typifies most of these products; there are no perfect or foolproof devices for filtering out obscene material. Programs like the popular SurfWatch operate by comparing Web site addresses to a list of prohibited sites that are known to contain pornographic material. SurfWatch currently prohibits more than 30,000 Web sites. However, this filtering program is less effective with Usenet newsgroups (electronic bulletin boards or chat rooms). SurfWatch depends on the name of the newsgroup to decide whether it should be banned; thus, an earlier version missed a chat room that displays pornographic material but goes under the innocuous name alt.kids-talk.penpals. The second problem is that these blocking programs can be used to enforce a code of political correctness unbeknownst to the parents or librarians who choose to install them. Sites that discuss AIDS, homosexuality, and related topics are routinely blocked by certain filtering programs. Often, these programs are not explicit or forthright about their blocking criteria, which greatly compounds the problem.
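To make the mechanics concrete, the following fragment is a minimal sketch, in Python, of how list-based blocking of this kind works. The host names and helper function are invented for illustration; they are not SurfWatch's actual data or implementation.

    # A minimal sketch of list-based URL blocking: the filter simply
    # compares a requested address against a list of known prohibited
    # sites. All entries and names here are illustrative.
    from urllib.parse import urlparse

    BLOCKED_HOSTS = {              # hypothetical blocklist
        "porn.example.com",
        "adult.example.net",
    }

    def is_blocked(url: str) -> bool:
        """Return True if the URL's host appears on the blocklist."""
        host = urlparse(url).hostname or ""
        return host.lower() in BLOCKED_HOSTS

    print(is_blocked("http://porn.example.com/pics"))      # True
    print(is_blocked("http://alt-kids-talk.example.org"))  # False

The second call illustrates the newsgroup problem noted above: matching on names alone passes over anything whose name reveals nothing objectionable, no matter what it actually contains.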
More sophisticated filtering mechanisms are appearing in the marketplace that can obviate some of the precision problems associated with blocking programs. Consider, for example, the rating system known as PICS (Platform for Internet Content Selection), which is rapidly gaining in popularity. PICS is more efficient and less expensive than blocking software. It is a framework that permits labeling of Internet content; it provides a standard format and supports multiple labeling schemes or rating services. Internet content providers can embed a label within their own Web site, or third parties can rate that Web site independently. In either case, a common labeling vocabulary is available for use. End users surfing the Web can rely on the author's label or the label provided by a third party. In some cases, of course, authors will be disinclined to label their own Web sites; Neo-Nazi sites, for example, typically do not have labels embedded within them. On the other hand, the Simon Wiesenthal Center, a nonprofit organization that combats anti-Semitism, could rate those Web sites based on the presence of anti-Semitic content and hate speech.

Labels can be embedded in Web documents or otherwise attached to a Web site, or they can be stored on a separate server. In the latter case, a user could instruct the software to check for the labels on that server before accessing a particular site. Software can be programmed to take action based on a label, such as blocking inappropriate, offensive Web sites. If a household wanted to prevent access to hateful, anti-Semitic Web sites, it could instruct its Internet browser to check a central server where those sites and other sites are labeled. Any site properly labeled as anti-Semitic by a third party like the Simon Wiesenthal Center, such as www.aryannation.org, would then trigger an action code blocking access to that site.

The use of this labeling infrastructure has already generated significant controversy. PICS certainly has its supporters, who argue that this voluntary system is far superior to one imposed by the government. They assert that filtering software devolves responsibility to the level where it should be in a pluralistic society, that is, with parents, schools, and local communities. In contrast, civil libertarians and many responsible professionals strenuously object to the use of rating systems like PICS, claiming that they can transform the Internet into a virtual censorship machine. They worry that because rating is so labor-intensive, a few rating systems will dominate and will exclude considerable questionable or controversial material. Restrictions inscribed into computer code end up having the force of law without the checks and balances provided by the legal system. With programs like PICS, we will be handing over regulation of the Internet to private enterprises, which can develop tendentious labeling schemes and thereby use filtering technologies to further their own particular political or social agendas. This is indeed a striking example of how code is becoming a substitute for law as a constraint on cyberspace behavior.
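The labeling approach can be sketched in the same spirit. The fragment below is a simplified, hypothetical imitation of PICS-style filtering: the label vocabulary, the bureau's data, and the household thresholds are all invented for illustration, and actual PICS labels follow a precise W3C-defined syntax that is not reproduced here.

    # A simplified sketch of PICS-style label filtering. A label bureau
    # (here just a dict) maps sites to ratings in some labeling
    # vocabulary; the user's software blocks any site whose rating
    # exceeds a locally chosen threshold. All data are invented.

    LABEL_BUREAU = {                               # third-party ratings
        "www.aryannation.org": {"hate_speech": 4},
        "www.example-news.org": {"hate_speech": 0},
    }

    POLICY = {"hate_speech": 2}   # this household's tolerance levels

    def allowed(host: str) -> bool:
        """Allow a site unless a label exceeds the local policy."""
        labels = LABEL_BUREAU.get(host)
        if labels is None:
            return True   # unlabeled sites pass; blocking them is the stricter choice
        return all(labels.get(cat, 0) <= limit
                   for cat, limit in POLICY.items())

    print(allowed("www.aryannation.org"))   # False: blocked by its label
    print(allowed("www.example-news.org"))  # True

Note that even this toy filter must decide what to do with unlabeled sites; whether they pass or are blocked by default is exactly the kind of policy choice, embedded silently in code, that the critics just mentioned worry about.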
Thanks to the nullification of the CDA, Internet stakeholders in increasing numbers will resort to software that may be far more effective than the law in suppressing pornographic material. Although some of the criticism directed at PICS and automated content control is exaggerated, the difficulties identified here should not be underestimated. At the same time, a more imperceptible problem with filtering systems is that they can be used to tailor and personalize one's perception of reality, to control one's environment in a detrimental way that narrows one's perspectives and experience. According to Cass Sunstein, "Each person could design his [or her] own communications universe. Each person could see those things that he [or she] wanted to see, and only those things." [12]

Finally, a potential disadvantage of PICS is that the filter can be imposed at any level in the vertical hierarchy that controls the accessibility of Internet services. It can be invoked at the individual user level, the corporate or institutional level, the ISP level, or even the state level. It can be used by the Chinese government to limit public discourse about democracy just as easily as it can be used by parents to keep pornographic Web sites far from the curious gaze of their children. There is significant opportunity for abuse, making many conscientious stakeholders apprehensive about its adoption.

Although we take no position on the merits of PICS, we do contend that users who embrace this method of dealing with cyberporn should deploy the software responsibly to minimize any potential for collateral damage. If this code is designed, developed, and used prudently, we may find that it has the wherewithal to create the desired effect with minimal negative impact on individual liberties or the common good.

So, what constitutes responsible use of these automated access codes? Let's suggest a few criteria. First, the use of PICS or other automated content controls should be strictly voluntary: parents or schools should be allowed to choose whether to restrict Web content, while authors can choose whether to label their Web sites. In contrast, a mandatory rating and filtering system administered or sponsored by the government would be problematic and imprudent; it would impose a uniform solution on what has always been regarded as a local problem. Second, a Web site that does choose to use a label must have the integrity to label itself accurately. Third, third parties that rate Web sites must strive to provide fair, accurate, and consistent ratings that are subject to reasonable external scrutiny. They must be flexible enough to judiciously handle appeals from Web sites that maintain that they have been mislabeled. Fourth, there should be an adequate level of transparency in blocking software and rating schemes. Although some information may be proprietary, labeling services must be as up front as possible about their labeling philosophy and their general standards of exclusion. CyberSitter, for example, which purports to protect children from pornography, blocks the Web site of the National Organization for Women. Such blocking is irresponsible unless this rating service explicitly reveals its political agenda to its patrons. Finally, PICS should not be adopted as a high-level, centralized filtering solution. Filtering should occur only at the lowest levels of the hierarchy.
It should not be used by search engines, ISPs, or states to censor the Internet; this is especially harmful if it is done in a surreptitious and dogmatic fashion.

Even if automated content controls such as PICS are used responsibly and diligently, their use still raises some troubling questions. Will there be chaos on the Internet as many different private and public groups express opinions about Web sites in the form of content labels? Should there be any restrictions on the provision of such labels? But aren't restrictions on content labels tantamount to restrictions on free speech? And which local institutions should assume the burden of implementing filtering technologies? We cannot consider all of these questions here, but the complex issues involved in the last question clearly emerge in the controversial debate about the use of filtering devices in libraries.

Both public and private libraries face a real dilemma: they can either allow unfettered Internet access, even to their youngest patrons, or use filtering products to protect minors from pornographic material. Libraries that favor the first approach argue that the use of filtering devices compromises the library's traditional commitment to the free flow of information and ideas. Some of this opposition to filtering devices originates from the imprecise way in which they function. The public library in New York City subscribes to this philosophy and presently does not use filtering devices. Furthermore, the American Library Association (ALA) is opposed to the installation of filters and endorses the idea of unrestricted Internet access for both adults and minors.

Some librarians, however, disagree with the ALA. They maintain that the Internet should be censored and that filtering programs provide a way to support and reinforce local community values. According to Brenda Branch, the director of the Austin Public Library in Texas, "We have a responsibility to uphold the community standard. . . . We do not put pornographic material in our book collection or video collection, and I also don't feel we should allow pornographic materials in over the Internet." [13] In Loudoun County, Virginia, the public library decided (after some soul-searching) to install X-Stop, which blocks access to a list of predetermined pornographic Web sites. In response, the ACLU sued the library on behalf of eight plaintiffs whose Web sites were blocked by X-Stop. According to the ACLU, blocking these sites violates the right to free speech and is akin to banning books. This suit has been regarded by many as a key test of the legitimacy of constraining one's freedom to use the Internet.

Everyone recognizes the novelty of cases such as Loudoun County, as the legal system struggles to find the most appropriate analogy. Opponents of filtering, for example, argue that blocking Internet sites is analogous to the library's purchasing an encyclopedia and deleting certain articles that do not meet its decency standard. The other side contends that access to a Web site is more akin to a request for an interlibrary loan, which the library is not required to satisfy. The case went through several stages, and in November 1998, a federal judge sided with the ACLU, ruling that the library's policy of using filtering software on all of its computers "offends the guarantee of free speech in the First Amendment." There is little doubt that this decision will be a critical precedent and will probably make most libraries less likely to rely on filters.
One compromise, common-sense position, used by the Boston Public Library, is the installation of filtering devices on children's computers but not on those in the adult areas. Still, the ALA and the ACLU do not favor this type of zoning approach. As the result of an ACLU lawsuit, the library system in Kern County, California, was forced to abandon such a zoning plan and to give all of its patrons, including minors, the right to use a computer without a filter. Moreover, this solution contradicts Article 5 of the ALA's Library Bill of Rights: "A person's right to use a library should not be denied or abridged because of origin, age, background, or views." [14] According to the ALA, this article precludes the use of filters on any computer system within a library.

How should these nettlesome matters be resolved? Let's assume for the sake of argument that filtering devices and systems (like PICS) do become more precise and accurate. If filtering is more dependable and blocking criteria more transparent, should libraries and other institutions give priority to the value of free expression and the free flow of ideas and information, no matter how distasteful some of that information is, or should they give priority to other community values at the expense of the unimpeded flow of information?

By following the first option and not regulating the Internet at the local level, we are giving the First Amendment its due: letting all voices be heard, even those that are sometimes rancorous and obscene. One can base this decision on several principles: the rights of children to access indecent material, the notion that censorship should not replace the cultivation of trust, and the education of individuals to act guardedly in cyberspace. Moreover, the occasional abuse of the Internet in a school or library setting should not be a reason to censor the entire network; censorship is a disproportionate response to isolated incidents of abuse. The argument for reliance on education and trust to solve this problem is a compelling one. Shouldn't schools and libraries attempt to educate students and young patrons about Internet use and abuse? As Richard Rosenberg argues, "if the first instinct is to withhold, to restrict, to prevent access, what is the message being promulgated?" [15] If institutions such as schools and libraries truly value the ideals of trust, openness, and freedom, imposing censorship on information is a bad idea that mocks those ideals. Also, wouldn't such restrictions start us down a dangerous slide toward more pernicious forms of censorship and repression? How and where do we draw the line once we begin to restrict access to Internet content? As a result, many free speech proponents argue that this global medium of expression deserves the highest level of protection a pluralistic society and its institutions can possibly offer.

Many other compelling and persuasive arguments can be made for keeping the Internet a free and open medium of exchange. There is something satisfying about the Chinese government's impotence to completely control free expression in this medium the way it now controls other forms of political dissent. The Internet can thereby become a wonderful vehicle for spreading the ideals of democracy, and it is surely no ally of tyrants or enemies of democracy. But should all information be freely accessible to anyone who wants it? Is this a rational, morally acceptable, and prudent policy?
What are the costs of living in a society that virtually absolutizes the right to free speech in cyberspace and makes all forms of speech readily available even to its youngest members? Because these costs can be high, it is critically important to consider the other side of this issue.

Many responsible moralists contend that some carefully formulated, narrow restrictions on specific types of indecent speech are perfectly appropriate when young children are involved. They maintain that parents, schools, libraries, and other local institutions have an obligation to promote and safeguard their own values as well as the values of their respective communities. This is part of the more general obligation to help promote public morality and the public order. Freedom and free expression are critically important human rights, but these and other rights can be reasonably exercised only in a context of mutual respect and common acceptance of certain moral norms, often called the public morality. In any civilized society, some of these norms concern sexual behavior, especially the sexual behavior of and toward children. Given the power of sexuality in one's life, the need to carefully integrate sexuality into one's personality, and the unfortunate tendency to regard others as sexual objects of desire (rather than as human beings), there is a convincing reason for fostering a climate in which impressionable children can be raised and nurtured without being subjected to images of gross or violent sexual conduct that totally depersonalize sexuality, exalt deviant sexual behavior, and thereby distort the view of responsible sexual behavior. This is clearly an aspect of the common good and public morality, and it is recognized as such by public officials in diverse societies who have crafted many laws (such as the laws against the production of child pornography) to protect minors and to limit the exercise of rights in this area.

Hence, given the importance of protecting young children as best we can from psychologically harmful pornographic images, parents and those institutions that function in loco parentis should not be timid about carefully controlling Internet content when necessary. [16] It is never easy to advocate censorship at any level of society, precisely because the right to free expression is so valuable and cherished. However, proponents of automated content controls argue that all human rights, including the right to free expression, are limited by each other and by other aspects of the common good, which can be called the public morality. According to this perspective, parents and schools are acting prudently when they choose to responsibly implement filtering technologies to help preserve and promote the values of respect for others and appropriate sexual conduct that are part of our public morality. Preserving free speech and dealing with sexually explicit material will always be a problem in a free and pluralistic society, and this is one way of achieving a proper balance when the psychological health of young children is at stake.

Other Forms of Problematic Speech

Hate Speech

The rapid expansion of hate speech on the Web raises similar problems and controversies. Many groups, such as white supremacists and anarchists, have Web sites that advocate their particular point of view. Some of these sites are blatantly anti-Semitic, whereas others are dominated by Holocaust revisionists who claim that the Holocaust never happened.
On occasion, these sites can be especially virulent and outrageous, such as the Web site of the Charlemagne Hammerskins. Its first scene reveals a man disguised in a ski mask who is bearing a gun and standing next to a swastika, and the site greets its visitors with an ominous warning: "Be assured, we still have one-way tickets to Auschwitz." Some hate Web sites take the form of computer games, such as Doom and Castle Wolfenstein, which have been constructed to include African-Americans, Jews, or homosexuals as targets of violence. In one animated game, the Dancing Baby, which became a popular television phenomenon, has been depicted as the "white power baby."

In the United States, the most widely publicized of these hate speech sites are those that attack doctors who perform abortions. Some of these sites are especially menacing and venomous, such as "The Nuremberg Files," which features a "Wanted" list of abortion doctors. The site's authors contend that they are not advocating violence but only expressing their opinion, albeit in a graphic format.

What can be done about this growing subculture of hate on the Internet? The great danger is that the message of hate and bigotry, once confined to reclusive, powerless groups, can now be spread more efficiently in cyberspace. Unlike obscenity and libel, hate speech is not illegal under U.S. federal law and is fully protected by the First Amendment. Even speech that incites hatred of a particular group is legally acceptable. The only exception is the use of "fighting words," which the Supreme Court has declared beyond the purview of the First Amendment; such speech, however, must threaten a clear and present danger. In the controversial case of the anti-abortion Web sites, a federal court recently ruled that the sites' content was too intimidating and hence was not protected by the First Amendment. But in general, censorship of online hate speech is inconsistent with the First Amendment.

In European countries like Germany and France, on the other hand, anti-Semitic, Nazi-oriented Web sites are illegal. In Germany, the government has required ISPs to eliminate these sites under the threat of prosecution. Critics of this approach argue that it is beyond the capability of ISPs to control content in a region as vast as the World Wide Web. It is also illegal for Internet companies to ship Nazi materials into Germany. This means that Amazon.com should not be selling books like Hitler's Mein Kampf to its German customers, although this restriction too will be difficult to enforce.

Although government regulation and explicit laws about hate speech are suitable for some countries, an alternative to government regulation is, once again, reliance on user empowerment and responsible filtering that does not erroneously exclude legitimate political speech. Parents and certain private and religious institutions might want to seize the initiative to shield young children and sensitive individuals from some of this material, such as virulent anti-Semitism. However, even more caution must be exercised in this case because the distinction between hate speech and unpopular or unorthodox political opinion is sometimes difficult to make. A rule of thumb is that hate speech Web sites are those that attack, insult, and demean whole segments of the population, such as Jews, Italians, African-Americans, whites, homosexuals, and so forth.
Many sites will fall into a nebulous gray area, and this will call for conscientiousness and discretion on the part of those charged with labeling those sites.

Anonymous Speech

Anonymous communication in cyberspace is enabled largely through the use of anonymous remailers, which strip off the identifying information on an email message and substitute an anonymous code or a random number. By encrypting a message and then routing that message through a series of anonymous remailers, a user can rest assured that his or her message will remain anonymous and confidential. This process is called chained remailing. The process is effective because none of the remailers has the key to read the encrypted message; neither the recipient nor any remailer (except the first) in the chain can identify the sender; and the recipient cannot connect the sender to the message unless every single remailer in the chain cooperates, which would require each remailer to have kept a log of its incoming and outgoing mail, something that is highly unlikely. According to Michael Froomkin, this technique of chained remailing is about as close as we can come on the Internet to "untraceable anonymity," that is, "a communication for which the author is simply not identifiable at all." [17] If someone clandestinely leaves a bunch of political pamphlets in the town square with no identifying marks or signatures, that communication is also characterized by untraceable anonymity. In cyberspace, things are a bit more complicated, and even the method of chained remailing is not foolproof: if the anonymous remailers join together in some sort of conspiracy to reveal someone's identity, there is not much anyone can do to safeguard anonymity.

Do we really need to ensure that digital anonymity is preserved, especially since it is so often a shield for subversive activities? It would be difficult to argue convincingly that anonymity is a core human good, utterly indispensable for human flourishing and happiness. One can surely conceive of people and societies for whom anonymity is not a factor in their happiness. However, although anonymity may not be a primary good, it is surely a secondary one, because for some people in some circumstances, a measure of anonymity is important for the exercise of their rational life plan and for human flourishing. The proper exercise of freedom, and especially free expression, requires the support of anonymity in some situations. Unless the speaker or author can choose to remain anonymous, opportunities for free expression become limited for various reasons, and that individual may be forced to remain mute on critical matters. Thus, without the benefit of anonymity, the value of freedom is constrained.

We can point to many specific examples in support of the argument that anonymous free expression deserves protection. Social intolerance may require some individuals to rely on anonymity to communicate openly about an embarrassing medical condition or an awkward disability. Whistleblowers may be understandably reluctant to come forward with valuable information unless they can remain anonymous. And political dissent, even in a democratic society that prizes free speech, may be impeded unless it can be exercised anonymously. Anonymity, then, has an incontestable value in the struggle against repression and even against more routine corporate and government abuses of power.
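The layered-encryption idea behind chained remailing can be illustrated with a short sketch. The code below uses symmetric keys from the third-party Python cryptography package purely for brevity; real remailer networks use public-key cryptography and actual mail transport, and the message format here is invented.

    # A minimal sketch of chained ("onion-style") remailing: the sender
    # wraps the message in one encryption layer per remailer, so each
    # remailer can peel exactly one layer and learns nothing else.
    from cryptography.fernet import Fernet

    remailer_keys = [Fernet.generate_key() for _ in range(3)]

    def wrap(message: bytes, recipient: str, keys) -> bytes:
        """Sender applies the layers innermost-first, so the remailer
        holding keys[0] strips the outermost layer, and so on."""
        payload = recipient.encode() + b"|" + message
        for key in reversed(keys):
            payload = Fernet(key).encrypt(payload)
        return payload

    def relay(payload: bytes, key: bytes) -> bytes:
        """One remailer peels one layer; it sees only the next-hop
        ciphertext, never the plaintext or the original sender."""
        return Fernet(key).decrypt(payload)

    packet = wrap(b"anonymous tip", "recipient@example.org", remailer_keys)
    for key in remailer_keys:       # each hop removes its own layer
        packet = relay(packet, key)
    print(packet)                   # b'recipient@example.org|anonymous tip'

As the text observes, the sender is protected so long as the remailers do not collude: any single remailer sees only ciphertext, and only the cooperation of every key holder in the chain can reconnect sender to message.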
In the conflict in Kosovo, for example, some individuals relied on anonymizing services (such as anonymizer.com) to describe atrocities perpetrated against ethnic Albanians. If the Serbians had been able to trace the identity of these individuals, their lives would have been in grave danger. Thus, although there is a cost to preserving anonymity, its central importance in human affairs is certainly beyond dispute. It is a positive good; that is, it possesses positive qualities that render it worthy to be valued. At a minimum, it is valued as an instrumental good, as a means of achieving the full actualization of free expression.

Anonymous communication, of course, whether facilitated by remailers or by other means, does have its drawbacks. It can be abused by criminals or terrorists seeking to communicate anonymously to plot their crimes. It also permits cowardly users to communicate without civility or to libel someone without accountability and with little likelihood of apprehension by law enforcement authorities. Anonymity can also be useful for revealing trade secrets or violating other intellectual property laws. In general, secrecy and anonymity are not beneficial for society if they are overused or used improperly. According to David Brin, "anonymity is the darkness behind which most miscreants--from mere troublemakers all the way to mass murderers and would-be tyrants--shelter in order to wreak harm, safe against discovery or redress by those they abuse." [18] Although we admit that too much secrecy is problematic, the answer is not to eliminate all secrecy and make everything public and transparent, which could be the inevitable result of a loss of digital anonymity.

Nonetheless, it cannot be denied that anonymity has its disadvantages and that digital anonymity and an unfettered Internet can be exploited for many forms of mischief. Therefore, governments are tempted to sanction the deployment of architectures that will make Internet users more accountable and less able to hide behind the shield of anonymity. Despite the potential for abuse, however, there are cogent reasons for eschewing the adoption of those architectures and protecting the right to anonymous free speech. A strong case can be made that the costs of banning anonymous speech in cyberspace are simply too high in an open and democratic society. The loss of anonymity may very well diminish the power of that voice that now resonates so loudly in cyberspace. As a result, regulators must proceed with great caution in this area.

Student Web Sites

At Westlake High School in Ohio, a student, Sean O'Brien, felt that he was being unfairly treated by one of his teachers. His response was to create a home Web page that included a photograph of his music teacher, whom he described as "an overweight middle-aged man who doesn't like to get haircuts." The high school was outraged and promptly took action: it suspended O'Brien for ten days, ordered him to delete the Web site, and threatened him with expulsion if he failed to comply. His parents filed suit against the school district, claiming that this order infringed on their son's right to free speech. The central question in the case revolves around the school's right to discipline a student for the contents of a personal Web site.
According to the ACLU and other legal scholars who supported O'Brien's lawsuit, a school's effort to exercise control over home Web sites, that is, over what students say outside of school, no matter how outrageous it may be, seems inconsistent with the First Amendment right to free expression. On this view, students have every right to use the Internet to criticize their schools or their teachers.

The legal precedent on the issue is somewhat ambiguous. The U.S. Supreme Court has recognized three types of control over student speech. First, schools can control the content of student newspapers and other student publications, such as those associated with extracurricular activities. Second, they can control and seek to curtail profane speech that occurs within the school. Third, they can regulate off-campus speech if that speech causes a "material and substantial" disruption of the school's classroom activities. The third criterion is obviously the only one that may be apposite in this case. Does O'Brien's criticism of his teacher constitute a material disruption? A marginal case can perhaps be made that because the site was read by many of O'Brien's classmates, the music teacher's class was "disrupted." However, embarrassing remarks aimed at teachers are probably not what the Supreme Court had in mind; the disruptive activity would have to be much more serious to warrant censorship of what a student says outside of the classroom. In the O'Brien case, an out-of-court settlement was reached in April 1998, in which the O'Brien family was awarded $30,000 in damages. O'Brien also received an apology from the school district, which promptly reinstated him at Westlake High in good standing.

The problem of controversial home Web sites will only get worse, and it may be a moderate but necessary price to pay for the information egalitarianism afforded to all computer users by the Internet. Schools must find a way to discourage student Web sites that mock teachers or indulge in profane insults by means other than censorship. A good starting point is a continued emphasis on the value of decent and civil speech in the realm of cyberspace.

Spam as Commercial Free Speech

Spam refers to unsolicited, promotional email, usually sent in bulk to thousands or millions of Internet users. Quite simply, it is junk email that is usually a significant annoyance to its recipients. The major difference between electronic junk mail and paper junk mail is that the per-copy cost of sending the former is so much lower. There are paper, printing, and postage charges for each piece of regular junk mail, but the marginal cost of sending an additional piece of junk email is negligible. For example, some direct marketers who specialize in spam charge their clients a fee as low as $400 to send out several million messages.

But spam is not cost free. The problem is that the lion's share of its costs are externalities, that is, costs borne involuntarily by others. As Robert Raisch has observed, spam is "postage due marketing." [19] The biggest cost associated with spam is the consumption of computer resources. For example, when someone sends out spam, the messages must sit on a disk somewhere, and this means that valuable disk space is being filled with unwanted mail. Also, many users must pay for each message received or for each disk block used. Others pay for the time they are connected to the Internet, time that can be wasted downloading and deleting spam.
As the volume of spam grows and commercial use of the Internet expands, these costs will continue their steady increase. Furthermore, when spam is sent through ISPs, they must bear the costs of delivery: wasted network bandwidth and the use of system resources such as disk storage space, along with the servers and transfer networks involved in the transmission process. In addition to these technical costs, there are also administrative costs. Users who receive these unwanted messages are forced to waste time reading and deleting them. If a vendor sends out 6 million messages and it takes 6 seconds to delete each one, the total cost of this one mailing is 10,000 person-hours of lost time.

Purveyors of spam contend that it is simply another form of commercial free speech that deserves the same level of First Amendment protection as traditional advertising. They point out, perhaps correctly, that a ban on spam would be not only impractical but also unconstitutional, because it would violate their constitutional right to communicate. The right to commercial forms of speech has stood on tenuous ground and has never been seen as legally or morally equivalent to political speech. In recent years, however, the Court has tended to offer more substantial protection for commercial speech than it did several decades ago. According to Michael Carroll, "With the development of our information economy, the Court has come to read the First Amendment to provide broader protection over the nexus between the marketplace of ideas and the marketplace for goods and services." [20]

The potential violation of free speech rights by those who want to suppress spam is further complicated by the difficulty of deciding which communications should be classified as "spam," that is, as junk email. Consider the controversial case of Intel Corporation v. Hamidi. Mr. Hamidi, a former Intel employee, was issued an injunction barring him from sending email to Intel employees connected to the company's network. Hamidi's mail consisted of protests and complaints about Intel's poor treatment of its employees. Intel maintained, and a court agreed, that Hamidi's mass mailings were equivalent to junk commercial email that disrupted its operations and distracted its employees. What makes this case difficult is the fact that Hamidi's speech was noncommercial: he was not advertising a product but rendering an opinion, however alien that opinion might have been in the Intel work environment.

A similar incident arose at a Pratt & Whitney factory in Florida, where a union organizing drive used email to contact the company's 2,000 engineers and solicit their interest in joining the union. According to Noam Cohen, unions have found email to be "an unusually effective organizing tool, one that combines the intimacy of a conversation, the efficiency of mass-produced leaflets and the precision of delivery by mail to work forces that are often widely dispersed." [21] But companies like Pratt & Whitney argue that these intrusive mass mailings are the same as spam and must be suppressed to avoid spam's negative effects, such as the congestion of their networks.

These and other cases suggest some provocative free speech questions. Should all bulk email, even noncommercial communications, be considered spam? If the Internet is to realize its full potential as a "democratizing force," shouldn't some forms of bulk email be permitted, both morally and legally?
What should be the decisive factors in determining when bulk email is intrusive spam and when it is a legitimate form of communication? What can be done about bulk email that is classified as spam? Should it be subject to government regulation because of its deleterious side effects?

Some regulatory possibilities include an outright ban on spam or a labeling requirement. The first option could be implemented by amending the Telephone Consumer Protection Act of 1991 (TCPA), which already makes it illegal to transmit unsolicited commercial advertisements over a facsimile machine; the TCPA could be modified to cover unsolicited commercial email as well as junk faxes. However, there would most likely be a constitutional challenge to a complete ban on spam because it appears to violate the First Amendment. Also, for those who want to preserve the Internet's libertarian ethic, it is unsettling to proscribe communications such as email based purely on their content.

The second option is a labeling requirement. All unsolicited commercial email and Internet advertising would carry a common identifier or label, allowing users to filter it out if they so desired. With accurate labels, ISPs could more easily control incoming spam, either by keeping all unsolicited advertising off their networks or by allowing those ads to reach only the destinations that have agreed to accept such emails. Critics of this approach argue that if a labeling requirement were enacted, it would implicitly legitimize spam, and this could have the perverse effect of actually increasing its volume: spam might become a more acceptable way of advertising, increasing the burden on consumers and ISPs to filter out even more unwanted junk email.

Another solution to the problem of spam, for those who oppose regulation and prefer a more bottom-up approach, is exclusive reliance on code without the support of the law. Filters are now available that will weed out spam while allowing legitimate mail to come through, even if the spam is not appropriately labeled. Crude email filters that look for signs of spam, such as messages that contain words like "Free!", have been on the market for some time. More sophisticated filters that distinguish junk mail from real mail are also being developed. Microsoft, for example, has developed a filter that relies on a multidimensional vector space to identify junk mail. This filter examines many more variables than ordinary ones. As a result, when the Microsoft filter is deployed, "it takes a constellation of symptoms to trigger the diagnosis of spam--some having to do with the words in a message and some with its appearance (for example, a high percentage of characters like ! and $$$)." [22]

Once again, we are confronted with a choice between top-down regulations and a bottom-up approach with fallible, yet effective, antispam technology. Of course, the same dangers that accompany the filtering of pornography can apply to the filtering of spam. Filtering protocols, even those that are well intentioned, come with a cost. As David Shapiro observes, excessive filtering "may cause our preferences to become ever more narrow and specialized, depriving us of a broad perspective when we likely need it most." [23]
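Both of the mechanisms just described, filtering on a mandated label and crude content scoring, can be sketched briefly. In the illustrative Python fragment below, the "ADV:" subject tag, the keyword list, and the threshold are all assumptions made for the example; real filters, such as the vector-space filter mentioned above, weigh far more features than this.

    # An illustrative sketch of two anti-spam options: (1) filtering on
    # a mandated label, and (2) scoring a "constellation of symptoms,"
    # as crude keyword filters do. All values here are invented.

    SPAM_LABEL = "ADV:"                       # hypothetical mandated tag
    SPAM_WORDS = {"free!", "$$$", "act now"}  # telltale words, per the text

    def looks_like_spam(subject: str, body: str, threshold: int = 2) -> bool:
        # Option 1: an accurately labeled message is trivially filtered.
        if subject.upper().startswith(SPAM_LABEL):
            return True
        # Option 2: score unlabeled mail on suspicious words and an
        # excess of '!' characters, flagging it past a threshold.
        text = (subject + " " + body).lower()
        score = sum(word in text for word in SPAM_WORDS)
        score += text.count("!") // 3
        return score >= threshold

    print(looks_like_spam("ADV: cheap loans", "..."))        # True (labeled)
    print(looks_like_spam("Hello", "Act now!!! Free! $$$"))  # True (scored)
    print(looks_like_spam("Lunch?", "Meet at noon?"))        # False

The sketch also shows why such filters are fallible: legitimate mail that happens to trip the symptoms will be discarded along with the junk, which is precisely the over-filtering worry Shapiro raises.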
PostScript

Spam, pornography, libel, hate speech: all are problematic forms of free expression that pose formidable challenges to cyberspace jurisprudence, which seeks to balance individual rights with the public good. Ideally, of course, individuals and organizations should regulate their own expression by refraining from hate speech, refusing to disseminate pornography to children, and resisting the temptation to use spam as a means of advertising goods or services. In the absence of such self-restraint, Internet stakeholders must make difficult decisions about whether to shield themselves from unwanted speech, whether it be crude obscenities or irksome junk email. Top-down government regulations such as CDA II or laws that ban junk email represent one method for solving this problem. Sophisticated filtering devices, which will undoubtedly continue to improve in their precision and accuracy, offer a different, but more chaotic, alternative. As we have been at pains to insist here, whatever combination of constraints is used (code, law, market, or norms), full respect must be accorded to key moral values such as personal autonomy; hence the need for nuanced ethical reflection about how these universal moral standards can best be preserved as we develop effective constraints for aberrant behavior in cyberspace. Otherwise, our worst apprehensions about the tyranny of the code or the laws of cyberspace may be realized.

Another option, of course, is to refrain from taking any action against these controversial forms of speech in cyberspace. Some civil libertarians convincingly argue that Internet stakeholders should eschew regulations and filtering and leave the Internet as unfettered as possible. On this view, we should tolerate nuisance speech on the Internet just as we tolerate it in the physical world.

Discussion Questions

1. What is your assessment of CDA II? Do you support the ACLU's views against this legislation?
2. Are automated content controls such as PICS a reasonable means of dealing with pornographic material on the Internet? At what level(s)--parents, schools/libraries, ISPs, etc.--should such controls be applied?
3. What sort of First Amendment protection do Web sites filled with hate speech or racist speech deserve?
4. Do you agree with the position that anonymity should be preserved in cyberspace? Or should every user's digital identity be mandated in some way?

CASE STUDY
The Librarian's Dilemma (Hypothetical)

Assume that you have just taken over as the head librarian of a library system in a medium-sized city in the United States. You discover that the main library building in the heavily populated downtown area has six Macintosh computers, but they are used only sporadically by the library's many patrons. The computers lack any interesting software and do not have Internet connectivity. As one of your first orders of business, you decide to purchase some popular software packages and to provide Internet access through Netscape's Navigator browser.

The computer room soon becomes a big success. The computers are in constant use, and the most popular activity is Web surfing. You are pleased with this decision because it is an excellent way for those in the community who cannot afford computer systems to gain access to the Internet.

Soon, however, some problems begin to emerge. On one occasion, some young teenagers (probably about twelve or thirteen years old) are seen downloading graphic sexual material. A shocked staff member tells you that these young boys were looking at sadistic obscene images when they were asked to leave the library. About ten days later, an older man is noticed looking at child pornography for several hours. Every few weeks, there are similar incidents.
Your associate librarian and several other staff members recommend that you purchase and immediately install some type of filtering software. Other librarians remind you that this would violate the ALA's code of responsibility. You reread that code and are struck by the following sentence: "The selection and development of library resources should not be diluted because of minors having the same access to library resources as adult users." They urge you to resist the temptation to filter, an activity they equate with censorship. One staff member argues that filtering is equivalent to purchasing an encyclopedia and cutting out articles that do not meet certain standards. Another librarian points out that the library does not put pornographic material in its collection, so why should it allow access to such material on the Internet?

As word spreads about this problem, there is also incipient public pressure from community leaders to do something about these computers. Even the mayor has weighed in--she too is uncomfortable with unfettered access. What should you do?

Questions

1. Is filtering of pornographic Web sites an acquisition decision, or does it represent an attempt to censor the library's collection?
2. Do libraries have any legal and/or moral duty to protect children from indecent and obscene material?
3. What course of action would you take? Defend your position.

CASE STUDY
Spam or Free Speech at Intel?

Mr. Kenneth Hamidi is a disgruntled former employee of Intel who objects to the way Intel treats its workers. Hamidi is the founder and spokesperson of an organization known as FACE, a group of current and former Intel employees, many of whom claim that they have been mistreated by Intel. Hamidi was dismissed from Intel for reasons that have not been made public, but he claims to be a victim of discrimination. Shortly after his dismissal in the fall of 1996, Hamidi began emailing Intel employees, informing them of what he saw as Intel's unfair labor practices. He alleges that the company is guilty of widespread age and disability discrimination, but Intel firmly denies this allegation. According to Intel, Hamidi sent about 30,000 email messages complaining about Intel's employment policies between 1996 and 1998. One message, for example, accused Intel of grossly underestimating the size of an impending layoff.

Intel's position was that Hamidi's bulk email was the equivalent of spam, congesting its email network and distracting its employees. Intel's lawyers contended that these unsolicited mailings were intrusive and costly for the corporation. Moreover, they argued, the unwanted messages were analogous to trespass on Intel's property: just as a trespasser forces his or her way onto someone else's property, so these messages were being forced upon Intel and its employees. In summary, their basic argument is that Hamidi does not have a right to express his personal views on Intel's proprietary email system. They also point out that Hamidi has many other forums in which to express his opinions, such as the FACE Web site.

In November 1998, a California Superior Court judge agreed with these arguments and issued an injunction prohibiting Hamidi from sending any more bulk email to Intel's employees. Defenders of Hamidi's actions argue that the injunction is an unfair overreaction and that his free speech rights are being violated. They claim that this bulk email should not be categorized as spam because it took the form of noncommercial speech, which deserves full First Amendment protection.
Hamidi's speech involves ideas; it is not an attempt to sell goods or services over the Internet. Hamidi, therefore, has a First Amendment right to disseminate his email messages to Intel's employees, even if the company is inconvenienced in the process.

Questions

1. Does Hamidi's speech deserve First Amendment protection? Should he be allowed to send these messages without court interference?
2. What do you make of Intel's argument that its censoring of Hamidi's bulk email amounts to protecting its private property?
3. Should there be new laws to clarify this issue? How might those laws be crafted?

References

1. Godwin, M. 1998. CyberRights. New York: Random House, p. 16.
2. ACLU v. Reno, 521 U.S. 870 (1997).
3. Ginsberg v. New York, 390 U.S. 629 (1968).
4. Elmer-DeWitt, P. 1995. Cyberporn. Time, July 3, p. 40.
5. Godwin, M., p. 223.
6. See Communications Decency Act, 47 U.S.C. § 223(d)(1)(B).
7. Zittrain et al. Brief for Appellants. Reno v. ACLU, No. 96-511.
8. ACLU v. Reno, 882.
9. Halperin, M. 1999. The COPA battle and the future of free speech. Communications of the ACM 42(1):25.
10. Mendels, P. 1999. Setback for a law shielding minors from smut Web sites. The New York Times, February 2, p. A10.
11. Katz, J. 1997. Virtuous reality. New York: Random House, p. 184.
12. Sunstein, C. 1995. The First Amendment in cyberspace. Yale Law Journal 104:1757.
13. Quoted in Harmon, A. 1997. To screen or not to screen: Libraries confront Internet access. The New York Times, June 23, p. D8.
14. See the American Library Association Web site, www.ala.org.
15. Rosenberg, R. 1993. Free speech, pornography, sexual harassment, and electronic networks. The Information Society 9:289.
16. See John Finnis's (1980) insightful discussion of these issues in Natural law and natural rights. Oxford: Oxford University Press, pp. 216-218.
17. Froomkin, M. 1996. Flood control on the information ocean: Living with anonymity, digital cash, and distributed databases. University of Pittsburgh Journal of Law and Commerce 395:278.
18. Brin, D. 1998. The transparent society. Reading, MA: Addison-Wesley, p. 27.
19. Raisch, R. Postage due marketing: An Internet company white paper. Available at http://www.internet.com:2010/marketing/postage.html.
20. Carroll, M. 1996. Garbage in: Emerging media and regulation of unsolicited commercial solicitations. Berkeley Technology Law Journal 11 (Fall).
21. Cohen, N. 1999. Corporations battling to bar use of email for unions. The New York Times, August 23, p. C1.
22. Baldwin, W. 1998. Spam killers. Forbes, September 21, p. 255.
23. Shapiro, p. 114.