concurring in part, dissenting in part:
I respectfully dissent from the majority’s analysis of the statutory immunity from libel suits created by § 230 of the Communications Decency Act (CDA).1 The majority gives the phrase “information provided by another” an incorrect and unworkable meaning that extends CDA immunity far beyond what Congress intended. Under the majority’s interpretation of § 230, many persons who intentionally spread vicious falsehoods on the Internet will be immune from suit. This sweeping preemption of valid state libel laws is not necessary to promote Internet use and is not what Congress had in mind.
Congress in 1996 was worried that excessive state-law libel lawsuits would threaten the growth of the Internet. Congress enacted the CDA, which immunizes “provider[s] or user[s]” of “interactive computer service[s]” from civil liability for material disseminated by them but “provided by another information content provider.” 47 U.S.C. § 230(c). Under the CDA, courts must treat providers or users of interactive computer services differently from other information providers, such as newspapers, magazines, or television and radio stations, all of which may be held liable for publishing or distributing obscene or defamatory material written or prepared by others. Congress believed this special treatment would “promote the continued development of the Internet and other interactive computer services” and “preserve the vibrant and competitive free market” for such services, largely “unfettered by Federal or State regulation.” 47 U.S.C. § 230(b)(1)-(2).
The statute states:
No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
47 U.S.C. § 230(c)(1). Three elements are thus required for § 230 immunity: (1) the defendant must be a provider or user of an “interactive computer service”; (2) the asserted claims must treat the defendant as a publisher or speaker of information; and (3) the challenged communication must be “information provided by another information content provider.”2 The majority and I agree on the importance of the CDA and on the proper interpretation of the first and second elements. We disagree only over the third element.3
The majority holds that information is “provided by another” when “a third person or entity that created or developed the information in question furnished it to the provider or user under circumstances in which a reasonable person in the position of the service provider or user would conclude that the information was provided for publication on the Internet or other ‘interactive computer service.’ ” Supra at 1034. In other words, whether information is “provided” depends on the defendant’s perception of the author’s intention. Nothing in the statutory language suggests that “provided” should be interpreted in this convoluted and unworkable fashion.
Under the majority’s rule, a court determining whether to extend CDA immunity to a defendant must determine whether the author of allegedly defamatory information — a person who often will be beyond the reach of the court’s process or, worse, unknown — intended that the information be distributed on the Internet. In many cases, the author’s intention may not be discernible from the face of the defamatory communication. Even people who want an e-mail message widely disseminated may not preface the message with words such as “Please pass it on.” Moreover, the fact-intensive question of the author’s intent is particularly unsuited for a judge’s determination before trial, when the immunity question will most often arise.
The majority’s rule will be incomprehensible to most citizens, who will be unable to plan their own conduct mindful of the law’s requirements. Laypersons may not grasp that their tort liability depends on whether they reasonably should have known that the author of a particular communication intended that it be distributed on the Internet. Laypersons certainly will not grasp why this should be the case, as a matter of justice, morality, or politics. Those who receive a potentially libelous e-mail message from another person would seldom wonder, when deciding whether to forward the message to others, “Did the author of this defamatory information intend that it be distributed on the Internet?”4 However, those who receive a potentially libelous e-mail almost certainly would wonder, “Is it appropriate for me to spread this defamatory message?” By shifting its inquiry away from the defendant’s conduct, the majority has crafted a rule that encourages the casual spread of harmful lies. The majority has improvidently crafted a rule that is foreign to the statutory text and foreign to human experience.
The majority rule licenses professional rumor-mongers and gossip-hounds to spread false and hurtful information with impunity. So long as the defamatory information was written by a person who wanted the information to be spread on the Internet (in other words, a person with an axe to grind), the rumor-monger’s injurious conduct is beyond legal redress. Nothing in the CDA’s text or legislative history suggests that Congress intended CDA immunity to extend so far. Nothing in the text, legislative history, or human experience would lead me to accept the notion that Congress in § 230 intended to immunize users or providers of interactive computer services who, by their discretionary decisions to spread particular communications, cause trickles of defamation to swell into rivers of harm.
The problems caused by the majority’s rule all would vanish if we focused our inquiry not on the author’s intent, but on the defendant’s acts, as I believe Congress intended. We should hold that the CDA immunizes a defendant only when the defendant took no active role in selecting the questionable information for publication. If the defendant took an active role in selecting information for publication, the information is no longer “information provided by another” within the meaning of § 230. We should draw this conclusion from the statute’s text and purposes.
A person’s decision to select particular information for distribution on the Internet changes that information in a subtle but important way: it adds the person’s imprimatur to it. The recipient of information that has been selected by another person for distribution understands that the information has been deemed worthy of dissemination by the sender. Information that bears such an implicit endorsement5 is no longer merely the “information provided by” the original sender. 47 U.S.C. § 230(c)(1). It is information transformed. It is information bolstered, strengthened to do more harm if it is wrongful. A defendant who has actively selected libelous information for distribution thus should not be entitled to CDA immunity for disseminating “information provided by another.”
My interpretation of § 230 is consistent with the CDA’s legislative history. Congress understood that entities that facilitate communication on the Internet — particularly entities that operate e-mail networks, “chat rooms,” “bulletin boards,” and “listservs” — have special needs. The amount of information communicated through such services is staggering. Millions of communications are sent daily. It would be impossible to screen all such communications for libelous or offensive content. Faced with potential liability for each message republished by their services, interactive computer service users and providers might choose to restrict severely the number and type of messages posted. The threat of tort liability in an area of such prolific speech would have an obvious chilling effect on free speech and would hamper the new medium.
These policy concerns have force when a potential defendant uses or provides technology that enables others to disseminate information directly without intervening human action. These policy concerns lack force when a potential defendant does not offer users this power of direct transmission. If a potential defendant employs a person to screen communications to select some of them for dissemination, it is not impossible (or even difficult) for that person to screen communications for defamatory content. Immunizing that person or the person’s employer from liability would not advance Congress’s goal of protecting those in need of protection.
If a person is charged with screening all communications to select some for dissemination, that person can decide not to disseminate a potentially offensive communication. Or that person can undertake some reasonable investigation. Such a process would be relatively inexpensive and would reduce the serious social costs caused by the spread of offensive and defamatory communications.
Under my interpretation of § 230, a company that operates an e-mail network would be immune from libel suits arising out of e-mail messages transmitted automatically across its network. Similarly, the owner, operator, organizer, or moderator of an Internet bulletin board, chat room, or listserv would be immune from libel suits arising out of messages distributed using that technology, provided that the person does not actively select particular messages for publication.
On the other hand, a person who receives a libelous communication and makes the decision to disseminate that message to others — whether via e-mail, a bulletin board, a chat room, or a listserv — would not be immune.
My approach also would further Congress’s goal of encouraging “self-policing” on the Internet. Congress decided to immunize from liability those who publish material on the Internet, so long as they do not actively select defamatory or offensive material for distribution. As a result, those who remove all or part of an offensive communication posted on (for example) an Internet bulletin board are immune from suit.6 Those who employ blocking or filtering technologies that allow readers to avoid obscene or offensive materials also are immune from suit.
On the other hand, Congress decided not to immunize those who actively select defamatory or offensive information for distribution on the Internet. Congress thereby ensured that users and providers of interactive computer services would have an incentive not to spread harmful gossip and lies intentionally.
Congress wanted to ensure that excessive government regulation did not slow America’s expansion into the exciting new frontier of the Internet. But Congress did not want this new frontier to be like the Old West: a lawless zone governed by retribution and mob justice. The CDA does not license anarchy. A person’s decision to disseminate the rankest rumor or most blatant falsehood should not escape legal redress merely because the person chose to disseminate it through the Internet rather than through some other medium. A proper analysis of § 230, which makes a human being’s decision to disseminate a particular communication the touchstone of CDA immunity, reconciles Congress’s intent to deregulate the Internet with Congress’s recognition that certain beneficial technologies, which promote efficient global communication and advance values enshrined in our First Amendment, are unique to the Internet and need special protection. Congress wanted to preserve the Internet and aid its growth, but not at all costs. Congress did not want to remove incentives for people armed with the power of the Internet to act with reasonable care to avoid unnecessary harm to others.
In this case, I would hold that Cremers is not entitled to CDA immunity because Cremers actively selected Smith’s e-mail message for publication. Whether Cremers’s Museum Security Network is characterized as a “moderated listserv,” an “e-mail newsletter,” or otherwise, it is certain that the Network did not permit users to disseminate information to other users directly without intervening human action. According to Cremers, “To post a response or to provide new information, the subscriber merely replies to the listserv mailing and the message is sent directly to Cremers, who includes it in the listserv with the subsequent distribution.” (emphasis added).
This procedure was followed with respect to Smith’s e-mail message accusing Batzel of owning art stolen by a Nazi ancestor. Smith transmitted the message to one e-mail account, from which Cremers received it. Cremers forwarded the message to a second e-mail account. He pasted the message into a new edition of the Museum Security Network newsletter. He then sent that newsletter to his subscribers and posted it on the Network’s website. Cremers’s decision to select Smith’s e-mail message for publication effectively altered the message’s meaning, adding to the message the unstated suggestion that Cremers deemed the message worthy of readers’ attention. Cremers therefore did not merely distribute “information provided by another,” and he is not entitled to CDA immunity.
From the record before us, we have no reason to think that Cremers is not well-meaning or that his concerns about stolen artwork are not genuine. Nor on this appeal do we decide whether his communications were defamatory or harmful in fact. We deal only with immunity. And, in my view, there is no immunity under the CDA if Cremers made a discretionary decision to distribute on the Internet defamatory information about another person, without any investigation whatsoever. If Cremers made a mistake, we should not hold that he may escape all accountability just because he made that mistake on the Internet.
I respectfully dissent.
1. I join the majority opinion’s analysis of our jurisdiction and the opinion’s affirmance of the district court’s grant of summary judgment to Mosler. I dissent only from Part III.C of the opinion.
. An "information content provider” is defined as "any person or entity that is responsible, in whole or in part, for the creation or development of information provided through the Internet or any other interactive computer service.” 47 U.S.C. § 230(e)(3).
3. This case may be the first to pose the question of whether CDA immunity extends to a user or provider of an Internet newsletter or “listserv.” CDA immunity should not depend, however, on rigid characterizations of particular services. As the amicus explains, there are many different kinds of listservs, each relying on different technology. There also are many kinds of Internet “bulletin boards,” “chat rooms,” “moderated listservs,” “unmoderated listservs,” and “e-mail newsletters.” Because the contours of these categories are not clear, an approach that determined CDA immunity based on a technology’s classification into one of these categories might cause considerable mischief. Rather than categorical rules, what is needed is an inquiry tailored to each case. CDA immunity should depend not on how a defendant’s technology is classified, but on the defendant’s conduct.
4. The subjective intent of the initial author, even if knowable, would say little about the propriety of disseminating a libelous communication.
5. By “endorsement,” I do not mean that the person who selects information for distribution agrees with the content of that information. Rather, I mean that the person has endorsed the information insofar as he or she has deemed it appropriate for distribution to others. That adds enough to the information to remove it from CDA immunity.
6. As long as an interactive computer service permits users to post messages directly in the first instance, the messages are “information provided by another,” and the user or provider is entitled to CDA immunity, even if the provider later removes all or part of the offensive communication. An important purpose of § 230 was to encourage service providers to self-regulate the dissemination of offensive material over their services. Zeran, 129 F.3d at 331. Preserving CDA immunity, even when a service user or provider retains the power to delete offensive communications, ensures that such entities are not punished for regulating themselves.