ORDER GRANTING MOTION TO DISMISS
William H. Orrick, United States District Judge

INTRODUCTION
In November 2015, Lloyd “Carl” Fields, Jr. and James Damon Creach were shot and killed while working as United States government contractors at a law enforcement training center in Amman, Jordan. The shooter, Anwar Abu Zaid, was a Jordanian police officer who had been studying at the center. In subsequent statements, the Islamic State of Iraq and Syria (“ISIS”) claimed responsibility for the attack, and according to Israeli intelligence, the gunman belonged to a clandestine ISIS terror cell. In their Second Amended Complaint, plaintiffs, the wife of Fields and the wife and children of Creach, seek to hold defendant Twitter, Inc. (“Twitter”) liable for Abu Zaid’s despicable acts and ISIS’s terrorism under 18 U.S.C. § 2333(a), part of the Anti-Terrorism Act (“ATA”), on the theory that Twitter provided material support to ISIS by allowing ISIS to sign up for Twitter accounts, and that this material support was a proximate cause of the November 2015 shooting.
I dismissed plaintiffs’ First Amended Complaint because their claims were barred by the Communications Decency Act (“CDA”), 47 U.S.C. § 230(c). In the Second Amended Complaint, plaintiffs attempt to plead around the CDA by asserting that Twitter provided ISIS with material support by allowing ISIS members to sign up for accounts, not by allowing them to publish content. But no amount of careful pleading can change the fact that, in substance, plaintiffs aim to hold Twitter liable as a publisher or speaker of ISIS’s hateful rhetoric, and that such liability is barred by the CDA. Twitter’s motion to dismiss is GRANTED without leave to amend.
BACKGROUND
In 2015, Fields and Creach travelled to Jordan through their work as government contractors. Second Amended Complaint ¶¶ 72-73 (“SAC”) (Dkt. No. 48). Both had served as law enforcement officers in the United States, and both were assigned to the International Police Training Center (“IPTC”), a facility in Amman run by the United States Department of State. Id. ¶ 74.
One of the men studying at the IPTC was Anwar Abu Zaid, a Jordanian police captain. Id. ¶ 77. On November 9, 2015, Abu Zaid smuggled an assault rifle and two handguns into the IPTC and shot and killed Fields, Creach, and three other individuals. Id. ¶ 78. ISIS subsequently “claimed responsibility” for the attack, stating,
And on ‘9 November 2015,’ Anwar Abu Zeid—after repenting from his former occupation—attacked the American crusaders and their apostate allies, killing two American crusaders, two Jordanian apostates, and one South African crusader. These are the deeds of those upon the methodology of the revived Khilafah. They will not let its enemies enjoy rest until enemy blood is spilled in revenge for the religion and the Ummah.
Id. ¶ 80.
Plaintiffs do not allege that ISIS recruited or communicated with Abu Zaid over Twitter, that ISIS or Abu Zaid used Twitter to plan, carry out, or raise funds for the attack, or that Abu Zaid ever viewed ISIS-related content on Twitter or even had a Twitter account. There is no connection between Abu Zaid and Twitter alleged in the SAC.
Plaintiffs accuse Twitter of violating 18 U.S.C. § 2333(a), part of the ATA, by knowingly providing material support to ISIS, in violation of 18 U.S.C. § 2339A and 18 U.S.C. § 2339B. SAC ¶¶ 84-87 (Count 1, section 2339A), 88-91 (Count 2, section 2339B). Section 2333(a) provides:
Any national of the United States injured in his or her person, property, or business by reason of an act of international terrorism, or his or her estate, survivors, or heirs, may sue therefor in any appropriate district court of the United States and shall recover threefold the damages he or she sustains and the cost of the suit, including attorney’s fees.
18 U.S.C. § 2333(a). Sections 2339A and 2339B prohibit the knowing provision of “material support or resources” for terrorist activities or foreign terrorist organizations. 18 U.S.C. §§ 2339A(a), 2339B(a)(1). The term “material support or resources” is defined to include “any property, tangible or intangible, or service,” including “communications equipment.” 18 U.S.C. §§ 2339A(b)(1), 2339B(g)(4).
Plaintiffs assert that Twitter’s “provision of material support to ISIS was a proximate cause of [their] injuries.” SAC ¶¶ 86, 90. They allege that Twitter “knowingly and recklessly provided ISIS with accounts on its social network” and that “[t]hrough this provision of material support, Twitter enabled ISIS to acquire the resources needed to carry out numerous terrorist attacks” including the attack that took place “on November 9, 2015 when an ISIS operative in Amman, Jordan shot and killed Lloyd ‘Carl’ Fields, Jr. and James Damon Creach.” Id. ¶ 1.
Plaintiffs contend that ISIS uses Twitter “to spread propaganda and incite fear by posting graphic photos and videos of its terrorist feats.” Id. ¶ 58. ISIS also uses Twitter “to raise funds for its terrorist activities,” id. ¶ 30, and to “post instructional guidelines and promotional videos,” id. ¶ 46.
In addition, ISIS uses Twitter as a recruitment platform, “reach[ing] potential recruits by maintaining accounts on Twitter so that individuals across the globe can reach out to [ISIS] directly.” Id. ¶ 43. “After first contact, potential recruits and ISIS recruiters often communicate via Twitter’s Direct Messaging capabilities.”1 Id. Plaintiffs allege that “[t]hrough its use of Twitter, ISIS has recruited more than 30,000 foreign recruits over the last year.” Id. ¶ 52.
Plaintiffs cite a number of media reports from between 2011 and 2014 concerning ISIS’s use of Twitter and Twitter’s “refusal to take any meaningful action to stop it.” Id. ¶¶ 19-26. They also describe several attempts by members of the public and the United States government to persuade Twitter to crack down on ISIS’s use of its services. Id. ¶¶ 27-32. They allege that, while Twitter has now instituted a rule prohibiting threats of violence and the promotion of terrorism, and announced in August 2016 that “it has suspended 235,000 accounts since February for promoting terrorism,” it still permits groups designated by the U.S. government as Foreign Terrorist Organizations to maintain official accounts. Id. ¶ 40.
LEGAL STANDARD
Federal Rule of Civil Procedure 8(a)(2) requires a complaint to contain “a short and plain statement of the claim showing that the pleader is entitled to relief,” Fed. R. Civ. P. 8(a)(2), in order to “give the defendant fair notice of what the claim is and the grounds upon which it rests,” Bell Atl. Corp. v. Twombly, 550 U.S. 544, 555, 127 S.Ct. 1955, 167 L.Ed.2d 929 (2007) (internal quotation marks and alterations omitted).
A motion to dismiss for failure to state a claim under Federal Rule of Civil Procedure 12(b)(6) tests the legal sufficiency of a complaint. Navarro v. Block, 250 F.3d 729, 732 (9th Cir. 2001). “Dismissal under Rule 12(b)(6) is appropriate only where the complaint lacks a cognizable legal theory or sufficient facts to support a cognizable legal theory.” Mendiondo v. Centinela Hosp. Med. Ctr., 521 F.3d 1097, 1104 (9th Cir. 2008). While a complaint “need not contain detailed factual allegations” to survive a Rule 12(b)(6) motion, “it must plead enough facts to state a claim to relief that is plausible on its face.” Cousins v. Lockyer, 568 F.3d 1063, 1067-68 (9th Cir. 2009) (internal quotation marks and citations omitted). A claim is facially plausible when it “allows the court to draw the reasonable inference that the defendant is liable for the misconduct alleged.” Ashcroft v. Iqbal, 556 U.S. 662, 678, 129 S.Ct. 1937, 173 L.Ed.2d 868 (2009) (internal quotation marks omitted).
In considering whether a claim satisfies this standard, the court must “accept factual allegations in the complaint as true and construe the pleadings in the light most favorable to the nonmoving party.” Manzarek v. St. Paul Fire & Marine Ins. Co., 519 F.3d 1025, 1031 (9th Cir. 2008). However, “conclusory allegations of law and unwarranted inferences are insufficient to avoid a Rule 12(b)(6) dismissal.” Cousins, 568 F.3d at 1067 (internal quotation marks omitted). A court may “reject, as implausible, allegations that are too speculative to warrant further factual development.” Dahlia v. Rodriguez, 735 F.3d 1060, 1076 (9th Cir. 2013).
DISCUSSION
In the First Amended Complaint (“FAC”), plaintiffs asserted that Twitter provided material support to ISIS because it had “knowingly permitted ... ISIS to use its social network as a tool for spreading extremist propaganda, raising funds and attracting new recruits.” FAC ¶ 1. I concluded that plaintiffs’ claims were barred by the CDA because they sought to hold Twitter liable as a publisher or speaker of ISIS’s speech. Order Dismissing FAC at 1 (Dkt. No. 47). In the SAC, plaintiffs attempt to avoid the CDA’s protection by reframing their claims—alleging that Twitter provided ISIS with material support, not by permitting it to use the social network, but by furnishing ISIS with accounts in the first place. SAC ¶ 1. Despite this careful repleading, plaintiffs’ claims have not changed in a meaningful way. They seek to hold Twitter liable for allowing ISIS to use its network to spread propaganda and objectionable, destructive content. But for the reasons outlined below, and in my prior order, these claims are barred under the CDA.
Twitter moves to dismiss on multiple grounds, but its principal argument is that plaintiffs’ claims are barred by section 230(c), the “protection for ‘Good Samaritan’ blocking and screening of offensive material” provision of the CDA. 47 U.S.C. § 230(c). Section 230(c) contains two subsections, only the first of which, section 230(c)(1), is relevant here:
(1) Treatment of publisher or speaker

No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
47 U.S.C. § 230(c)(1).
While the Ninth Circuit has described the reach of section 230(c)(1) in broad terms, stating that it “immunizes providers of interactive computer services against liability arising from content created by third parties,” the statute does not “create a lawless no-man’s-land on the internet.” Fair Hous. Council of San Fernando Valley v. Roommates.Com, LLC, 521 F.3d 1157, 1162, 1164 (9th Cir. 2008); see also Doe v. Internet Brands, Inc., 824 F.3d 846, 853 (9th Cir. 2016) (noting that “the CDA does not declare a general immunity from liability deriving from third-party content”) (internal quotation marks omitted). Rather, separated into its elements, section 230(c)(1) protects from liability only (a) a provider or user of an interactive computer service (b) that the plaintiff seeks to treat as a publisher or speaker (c) of information provided by another information content provider. Barnes v. Yahoo!, Inc., 570 F.3d 1096, 1100-01 (9th Cir. 2009).
Plaintiffs do not dispute that Twitter is an interactive computer service provider, or that the offending content highlighted in the SAC was provided by another information content provider. They dispute only the second element of Twitter’s section 230(c)(1) defense, i.e., whether they seek to treat Twitter as a publisher or speaker.
The prototypical cause of action seeking to treat an interactive computer service provider as a publisher or speaker is defamation. See, e.g., Internet Brands, Inc., 824 F.3d at 850-51; Barnes, 570 F.3d at 1101.2 However, “the language of the statute does not limit its application to defamation cases.” Barnes, 570 F.3d at 1101. Courts have applied section 230(c)(1) to a variety of claims, including negligent undertaking, id. at 1102-03, intentional assault, Klayman v. Zuckerberg, 753 F.3d 1354, 1357-60 (D.C. Cir. 2014), and violation of anti-sex-trafficking laws, Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12, 18-24 (1st Cir. 2016). “[W]hat matters is not the name of the cause of action—defamation versus negligence versus intentional infliction of emotional distress—[but] whether the cause of action inherently requires the court to treat the defendant as the publisher or speaker of content provided by another.” Barnes, 570 F.3d at 1101-02 (internal quotation marks omitted). “[C]ourts must ask whether the duty that the plaintiff alleges the defendant violated derives from the defendant’s status or conduct as a publisher or speaker. If it does, section 230(c)(1) precludes liability.” Id. (internal quotation marks omitted).
Twitter contends that, despite plaintiffs’ attempt to reframe their claims, plaintiffs still seek to hold it liable as the publisher of content created by ISIS. Motion to Dismiss (“Mot.”) at 7-8 (Dkt. No. 49). It notes that while the SAC contains “some allegations ... concerning Twitter’s provision of accounts to ISIS,” it still “contains, and necessarily relies on ‘detailed descriptions of ISIS-related messages, images, and videos disseminated through Twitter and the harms allegedly caused by the dissemination of that content.’ ” Mot. at 8 (quoting Order Dismissing FAC at 7-8). Twitter highlights that many of the allegations in the SAC still fault it for failing to detect and prevent ISIS-related content: Twitter “failed to respond to pleas to shut down clear incitements to violence”; Twitter “does not actively monitor and will not censor user content”; “experts agree that Twitter could and should be doing more to stop ISIS from using its social network”; the White House announced it will “press the biggest Internet firms to take a more proactive approach to countering terrorist messages and recruitment online.” SAC ¶¶ 30, 32, 36, 37. According to Twitter, plaintiffs’ claims are based on Twitter’s alleged failure to exclude this third-party content, a quintessential responsibility of a publisher. See Mot. at 8-9; see also Klayman, 753 F.3d at 1359 (“the very essence of publishing is making the decision whether to print or retract a given piece of content”); Doe v. MySpace, Inc., 528 F.3d 413, 420 (5th Cir. 2008) (“decisions relating to the monitoring, screening, and deletion of content [are] actions quintessentially related to a publisher’s role”) (internal quotation marks omitted); Roommates, 521 F.3d at 1170-71 (noting that section 230(c)(1) applies to “any activity that can be boiled down to deciding whether to exclude material that third parties seek to post online,” and that “determin[ing] whether or not to prevent [the] posting” of material by third parties is “precisely the kind of activity” covered by the statute); Batzel v. Smith, 333 F.3d 1018, 1031 (9th Cir. 2003) (“the exclusion of ‘publisher’ liability necessarily precludes liability for exercising the usual prerogative of publishers to choose among proffered material”). Twitter adds that plaintiffs’ provision of accounts theory does not resolve this issue because “decisions about whether particular third parties may have Twitter accounts” are also publishing activity. Mot. at 9 (quoting Order Dismissing FAC at 10).
Plaintiffs make two main arguments in response. First, they contend that their claims are not content based because “[t]he theory of liability set out in the SAC is based purely on Defendant’s knowing provision of Twitter accounts to ISIS, not content created with those accounts.” Opposition (“Oppo.”) at 1-2 (Dkt. No. 52). They assert that “[t]he decision to provide ISIS with a Twitter account is wholly distinct from permitting ISIS to tweet propaganda. The content-neutral decision about whether to provide someone with a tool is not publishing activity as defined by the Ninth Circuit.” Id. Plaintiffs further explain that
Providing ISIS with a Twitter account is not publishing under [the Ninth Circuit’s] definitions because it does not involve reviewing, editing, or deciding whether to publish or withdraw tweets. Nor is deciding whether someone can sign up for a Twitter account the same thing as deciding what content can be published; handing someone a tool is not the same thing as supervising their use of that tool. The CDA bars claims based on the latter, but the theory of liability in this case is based solely on the former. And, notably, many Twitter users who sign up for accounts never issue a single tweet. In other words, account creation and content creation on Twitter are two distinct activities.
Id. at 3.
Plaintiffs also argue that reliance on content to prove causation does not turn their non-content-based provision of accounts theory into a CDA-barred claim. They insist that “[a]ll of the content-based allegations in the SAC are strictly limited to Section III, titled ‘TWITTER PROXIMATELY CAUSED PLAINTIFFS’ INJURIES’.” Id. at 4. And they urge that “where a theory of liability relies on content purely for purposes of causation, but otherwise does not depend on content as a critical element, the CDA does not apply.” Id.
Second, plaintiffs highlight their allegations regarding Twitter’s Direct Messaging capabilities and assert that “[b]ecause Direct Messages are unpublished private communications, this theory of liability does not seek to treat Defendant as a publisher or speaker and, accordingly, the CDA does not apply.” Id. at 5. Plaintiffs assert that the “ordinary meaning of ‘publisher’ is one who disseminates information to the public” and therefore, plaintiffs’ “theory of liability, based on purely private content, is not barred by the CDA.” Id. at 5-6.
I. PROVISION OF ACCOUNTS THEORY
There are at least three problems with plaintiffs’ provision of accounts theory. First, providing accounts to ISIS is publishing activity, just like monitoring, reviewing, and editing content. Courts have repeatedly described publishing activity under section 230(c)(1) as including decisions about what third-party content may be posted online. See, e.g., Klayman, 753 F.3d at 1359 (“the very essence of publishing is making the decision whether to print or retract a given piece of content”); MySpace, 528 F.3d at 420 (“decisions relating to the monitoring, screening, and deletion of content [are] actions quintessentially related to a publisher’s role”); Roommates, 521 F.3d at 1170-71 (“determin[ing] whether or not to prevent [the] posting” of third-party material online is “precisely the kind of activity” covered by the CDA); Batzel, 333 F.3d at 1031 (“the exclusion of ‘publisher’ liability necessarily precludes liability for exercising the usual prerogative of publishers to choose among proffered material”). Plaintiffs’ provision of accounts theory is slightly different, in that it is based on Twitter’s decisions about whether particular third parties may have Twitter accounts, as opposed to what particular third-party content may be posted. Plaintiffs urge that Twitter’s decision to provide ISIS with Twitter accounts is not barred by section 230(c)(1) because a “content-neutral decision about whether to provide someone with a tool is not publishing activity.”
Although plaintiffs assert that the decision to provide an account to or withhold an account from ISIS is “content-neutral,” they offer no explanation for why this is so, and I do not see how this is the case. A policy that selectively prohibits ISIS members from opening accounts would necessarily be content based, as Twitter could not possibly identify ISIS members without analyzing some speech, idea, or content expressed by the would-be account holder: i.e., “I am associated with ISIS.” The decision to furnish accounts would be content-neutral if Twitter made no attempt to distinguish between users based on content—for example, if it prohibited everyone from obtaining an account, or prohibited every fifth person from obtaining an account. But plaintiffs do not assert that Twitter should shut down its entire site or impose an arbitrary, content-neutral policy. Instead, they ask Twitter to specifically prohibit ISIS members and affiliates from acquiring accounts—a policy that necessarily targets the content, ideas, and affiliations of particular account holders. There is nothing content-neutral about such a policy.
Further, as Twitter points out, even if a user never posts a tweet, “a user who opens an account necessarily puts content online,” as each Twitter account displays a public user name, such as @ISIS_Media_Hub, and a user photograph, such as a bearded man’s face. Reply at 4 (Dkt. No. 53). The user name @ISIS_Media_Hub, on its own, expresses a number of ideas: “I am affiliated with ISIS”; “I am a media source”; and “Follow me for information and publicity about ISIS’s activities.” A decision to decline to furnish an account to this user, based on its apparent ISIS affiliation, would be a publishing decision to prohibit the public dissemination of these ideas.
Functionally, plaintiffs urge that Twitter should have imposed a blanket ban on pro-ISIS content by prohibiting ISIS affiliates from opening accounts at all. While the timing and scope of such a censorship policy differs from one barring specific objectionable tweets, it is still a content-based policy, and therefore, would constitute publishing activity. Despite being aimed at blocking Twitter accounts instead of particular tweets, plaintiffs’ provision of accounts theory is still based on Twitter’s alleged violation of a “duty ... derive[d] from [its] status or conduct as a publisher.” Barnes, 570 F.3d at 1102.
A recent First Circuit case, Jane Doe No. 1 v. Backpage.com, LLC, 817 F.3d 12 (1st Cir. 2016), adds further support to this conclusion, as the Backpage court held that decisions about the structure and operation of a website are content-based decisions. In Backpage, the plaintiffs, each of whom had been a victim of sex trafficking, sued the defendant website provider under the Trafficking Victims Protection Reauthorization Act, asserting that the defendant had violated the statute through various “choices [it] ha[d] made about the posting standards for advertisements,” including “the lack of controls on the display of phone numbers, the option to anonymize email addresses, [and the] acceptance of anonymous payments.” Id. at 20. The plaintiffs argued that “these choices are distinguishable from publisher functions.” Id. The First Circuit disagreed, holding that section 230(c)(1) “extends to the formulation of precisely the sort of website policies and practices [the plaintiffs] assail.” Id. The court explained that decisions regarding the “structure and operation of [a] website”—such as “permit[ting] users to register under multiple screen names” and other decisions regarding “features that are part and parcel of the overall design and operation of the website”—“reflect choices about what content can appear on the website and in what form” and thus “fall within the purview of traditional publisher functions.” Id. at 20-21.
Likewise, Twitter’s decisions to structure and operate itself as a “platform ... allow[ing] for the freedom of expression [of] hundreds of millions of people around the world,” SAC ¶ 35, and, through its hands-off policy, allowing ISIS to obtain “dozens of accounts on its social network,” id. ¶ 9, “reflect choices about what [third-party] content can appear on [Twitter] and in what form,” Backpage, 817 F.3d at 21. Where such choices form the basis of a plaintiff’s claim, section 230(c)(1) applies.
Second, although plaintiffs have carefully restructured their SAC to focus on their provision of accounts theory of liability, at their core, plaintiffs’ allegations remain that Twitter knowingly failed to prevent ISIS from disseminating content through the Twitter platform, not that it merely provided accounts to ISIS.
In the SAC, plaintiffs have consolidated their allegations about Twitter’s provision of accounts to ISIS under a section titled “TWITTER PROVIDED ACCOUNTS TO ISIS” where they assert, for example, that: “Since 2010, Twitter has provided ISIS with dozens of accounts on its social network”; “Twitter permitted Al-Furqan, ISIS’s official media arm, to maintain an account with 19,000 followers”; “Al-Hayat Media Center, ISIS’s official public relations group, maintained at least a half dozen accounts”; “@ISIS_Media_Hub, had 8,954 followers as of September 2014”; “As of December 2014, ISIS had an estimated 70,000 Twitter accounts, at least 79 of which were ‘official.’ ” SAC ¶¶ 9-13. Plaintiffs characterize these allegations as “based purely on Defendant’s knowing provision of Twitter accounts to ISIS, not content created with those accounts” and contend that their claims against Twitter are therefore not barred by the CDA. Oppo. at 1-2.
As discussed above, the decision to furnish an account, or prohibit a particular user from obtaining an account, is itself publishing activity. Further, while plaintiffs urge me to focus exclusively on those five short paragraphs, I cannot ignore that the majority of the SAC still focuses on ISIS’s objectionable use of Twitter and Twitter’s failure to prevent ISIS from using the site, not its failure to prevent ISIS from obtaining accounts. For example, plaintiffs spend almost nine pages, more than half of the complaint, explaining that “Twitter Knew That ISIS Was Using Its Social Network But Did Nothing”; “ISIS Used Twitter to Recruit New Members”; “ISIS Used Twitter to Fundraise”; and “ISIS Used Twitter To Spread Propaganda.” SAC ¶¶ 19-71 (emphasis added). These sections are riddled with detailed descriptions of ISIS-related messages, images, and videos disseminated through Twitter and the harms allegedly caused by the dissemination of that content.
The SAC also includes a number of allegations specifically faulting Twitter for failing to detect and prevent the dissemination of ISIS-related content through the Twitter platform. See, e.g., id. ¶¶ 30 (Twitter “failed to respond to pleas to shut down clear incitements to violence”), 36 (Twitter “does not actively monitor and will not censor user content”).
It is no surprise that plaintiffs have struggled to excise their content-based allegations; their claims are inherently tied up with ISIS’s objectionable use of Twitter, not its mere acquisition of accounts. Though plaintiffs allege that Twitter should not have provided accounts to ISIS, the unspoken end to that allegation is the rationale behind it: namely, that Twitter should not have provided accounts to ISIS because ISIS would and has used those accounts to post objectionable content.
In short, the theory of liability alleged in the SAC is not that Twitter provides material support to ISIS by providing it with Twitter accounts, but that Twitter does so by allowing ISIS to use Twitter “to send its propaganda and messaging out to the world and to draw in people vulnerable to radicalization.” SAC ¶ 41. Plaintiffs do not dispute that this theory seeks to treat Twitter as a publisher and is barred by section 230(c)(1). Oppo. at 3.
Acknowledging that the SAC contains substantial references to ISIS content on Twitter, plaintiffs nevertheless argue that their claims are not barred by the CDA because “[a]ll of the content-based allegations in the SAC are strictly limited to Section III, titled ‘TWITTER PROXIMATELY CAUSED PLAINTIFFS’ INJURIES,’” and the mere fact that a claim relies on content to demonstrate proximate cause does not make the underlying claim content-based. They point to Barnes and Internet Brands in support of their contention that a claim that relies on third-party content to demonstrate proximate cause is not necessarily barred by the CDA. While this may be true, those cases, which involve substantially different facts than those at issue here, only highlight that plaintiffs’ theory of liability is not content-neutral and cannot survive.
Plaintiffs attempt to liken the provision of accounts theory to the promissory estoppel claim raised in Barnes, but the facts of that case are substantially different. The plaintiff in Barnes sued Yahoo alleging that she had relied on its promise that it would remove explicit photographs her ex-boyfriend had posted online without her consent, but that it had failed to fulfill this promise. 570 F.3d at 1098-99. The Ninth Circuit found that this promissory estoppel claim was not precluded by section 230(c)(1) because the plaintiff did not “seek to hold Yahoo liable as a publisher or speaker of third-party content, but rather as the counterparty to a contract, as a promisor who has breached.” Id. at 1107. In other words, the plaintiff’s theory of liability was based not on Yahoo’s “publishing conduct,” but rather on its “manifest intention to be legally obligated to do something.” Id. By contrast, plaintiffs here assert no theory based on contract liability and allege no promise made or breached by Twitter. Barnes does not indicate that the conduct underlying the provision of accounts theory is beyond the scope of publishing conduct.
Plaintiffs also rely on Internet Brands, but again, that case involves a substantially different set of facts from this one. The plaintiff there sued the defendant website operator for negligent failure to warn, alleging that the defendant knowingly failed to warn her that two individuals were using the website to identify and lure rape victims. 824 F.3d at 848-50. Although the plaintiff had posted information on the website, the two individuals had not. Id. In holding that the plaintiff did not seek to hold the defendant liable as a publisher of third-party content, the Ninth Circuit emphasized that her negligent failure to warn claim
would not require [the defendant] to remove any user content or otherwise affect how it publishes or monitors such content ... Any alleged obligation to warn could have been satisfied without changes to the content posted by the website’s users and without conducting a detailed investigation. [The defendant] could have given a warning to ... users, perhaps by posting a notice on the website or by informing users by email what it knew about the activities of [the individuals]. Posting or emailing such a warning could be deemed an act of publishing information, but section 230(c)(1) bars only liability that treats a website as a publisher or speaker of content provided by somebody else ... A post or email warning that [the defendant] generated would involve only content that [the defendant] itself produced.
Id. at 851. Plaintiffs’ provision of accounts theory, on the other hand, has nothing to do with information Twitter itself should have posted online. Moreover, it would significantly affect Twitter’s monitoring and publication of third-party content by effectively requiring Twitter to police and restrict its provision of Twitter accounts. Internet Brands, like Barnes, does not help plaintiffs’ case.
Plaintiffs’ reliance on Barnes and Internet Brands fails because, unlike those cases, and regardless of how plaintiffs have organized their complaint, their theory of liability is inherently tied to content. Creative use of headings does not change the nature of their claims, and plaintiffs cannot avoid section 230(c)(1) immunity by segregating their content-based allegations under a proximate cause banner. In the SAC, plaintiffs still seek to hold Twitter liable for allowing ISIS to post propaganda and other objectionable content on its site. This is fundamental publishing activity and falls under section 230(c)(1). Plaintiffs’ attempt to limit their content-based allegations to the proximate cause section does not save their provision of accounts theory.
The third problem with the provision of accounts theory is that plaintiffs have not adequately alleged causation. Although the parties dispute the exact formulation of the appropriate causal test for civil liability under the ATA, they agree that the statute requires a showing of proximate causation. See Reply at 9-10; Oppo. at 10-11; see also 18 U.S.C. § 2333(a) (authorizing a suit for damages by “[a]ny national of the United States injured ... by reason of an act of international terrorism”) (emphasis added); 18 U.S.C. § 2339A(a) (establishing that an individual provides material support by providing resources “knowing or intending that they are to be used in preparation for, or in carrying out, a violation of” the ATA) (emphasis added); In re Terrorist Attacks on Sept. 11, 2001, 714 F.3d 118, 123-25 (2d Cir. 2013) (affirming a Rule 12(b)(6) dismissal of ATA claims for failure to plausibly allege proximate causation); Rothstein v. UBS AG, 708 F.3d 82, 94-98 (2d Cir. 2013) (same).
Even under plaintiffs’ proposed “substantial factor” test, see Oppo. at 11, the allegations in the SAC do not support a plausible inference of proximate causation between Twitter’s provision of accounts to ISIS and the deaths of Fields and Creach. Plaintiffs allege no connection between the shooter, Abu Zaid, and Twitter. There are no facts indicating that Abu Zaid’s attack was in any way impacted by, helped by, or the result of ISIS’s presence on the social network. Instead, they insist they have adequately pleaded proximate causation because they have alleged “(1) that Twitter provided fungible material support to ISIS, and (2) that ISIS was responsible for the attack in which Lloyd Fields, Jr. and James Damon Creach were killed.” Id. at 13. Under such an expansive proximate cause theory, any plaintiff could hold Twitter liable for any ISIS-related injury without alleging any connection between a particular terrorist act and Twitter’s provision of accounts. And, since plaintiffs allege that Twitter has already provided ISIS with material support, Twitter’s liability would theoretically persist indefinitely and attach to any and all future ISIS attacks. Such a standard cannot be and is not the law.
As Twitter points out, courts have rejected similarly expansive proximate cause theories under the ATA. For example, in In re Terrorist Attacks on Sept. 11, 2001, the Second Circuit rejected the allegation that “providing routine banking services to organizations and individuals said to be affiliated with al Qaeda—as alleged by plaintiffs—proximately caused the September 11, 2001 attacks or plaintiffs’ injuries.” 714 F.3d at 124. It similarly found that the plaintiffs in Rothstein v. UBS had not demonstrated proximate cause by simply alleging that “they were injured after UBS violated federal law” and noted that such a standard “would mean that any provider of U.S. currency to a state sponsor of terrorism would be strictly liable for injuries subsequently caused by a terrorist organization associated with that state.” Rothstein, 708 F.3d at 96.
Plaintiffs have not alleged any facts linking Twitter’s provision of accounts to ISIS to Abu Zaid’s attack. Instead, they assert that they have demonstrated proximate cause by alleging that Twitter provided ISIS with material support in the form of a powerful communication tool and that ISIS has claimed responsibility for Abu Zaid’s actions. These allegations do not plausibly suggest the necessary causal connection between Twitter’s provision of accounts and the attack that killed Lloyd Fields, Jr. and James Damon Creach.3
For these three reasons, plaintiffs’ claims based on the provision of accounts theory are DISMISSED.
II. DIRECT MESSAGING THEORY
Plaintiffs’ other attempt to evade section 230(c)(1) is based on Twitter’s Direct Messaging capabilities, which allow for the sending of private messages through the Twitter platform. Oppo. at 5-8. Plaintiffs contend that a “publisher” under section 230(c)(1) is “one who disseminates information to the public” and thus “the CDA does not apply to claims based on purely private communications, including claims based on ISIS’s use of Twitter’s direct messages.” Id. at 7. In support of their Direct Messaging theory, plaintiffs abandon all pretense of a content-less basis for liability and assert instead that ISIS has “used [Direct Messaging] to its great advantage,” specifically, to contact and communicate with potential recruits. Id. at 5; see also SAC ¶¶ 43-45.
Publishing activity under section 230(c)(1) extends to Twitter’s Direct Messaging capabilities. As noted earlier, Congress enacted section 230(c)(1) in part to respond to a New York state court decision finding that an internet service provider could be held liable for defamation based on third-party content posted on its message boards. See, e.g., Roommates, 521 F.3d at 1163. In defamation law, the term “publication” means “communication [of the defamatory matter] intentionally or by a negligent act to one other than the person defamed.” Barnes, 570 F.3d at 1104 (internal quotation marks omitted). The Fourth Circuit has held that an internet service provider covered by the “traditional definition” of publisher is protected by section 230(c)(1). Zeran v. Am. Online, Inc., 129 F.3d 327, 332 (4th Cir. 1997) (explaining that “every repetition of a defamatory statement is considered a publication,” and “every one who takes part in the publication is charged with publication”) (internal quotation marks and alterations omitted).
While plaintiffs insist that reference to the legislative history is improper and that “publisher” must be given its normal, ordinary meaning—“to make public”—the Ninth Circuit has already indicated that section 230(c)(1) extends at least as far as prohibiting internet service providers from being treated as publishers for the purposes of defamation liability. Barnes, 570 F.3d at 1101 (“the language of the statute does not limit its application to defamation cases”) (emphasis added). In response to this, plaintiffs assert that “even if Congress had intended that the defamation definition of ‘publisher’ be applied in defamation cases, it makes no sense to apply that definition outside of the context of defamation claims.” Oppo. at 8. But this is contradicted by the Ninth Circuit’s statement that “section 230(c)(1) precludes courts from treating internet service providers as publishers not just for the purposes of defamation law ... but in general.” Id. Further, if the goal of the CDA is to promote unfettered and unregulated free speech on the internet, there is no reason that section 230(c)(1) should shield providers of private messaging services from defamation liability, but no other civil liability. Treating service providers as publishers in non-defamation cases would undermine the protections afforded for defamation claims by forcing providers to implement general, content-restricting policies to filter and block objectionable content that might lead to non-defamation liability. Under this analysis, the private nature of Direct Messaging does not remove the transmission of such messages from the scope of publishing activity under section 230(c)(1).
In addition, a number of courts have applied the CDA to bar claims predicated on a defendant’s transmission of nonpublic messages, and have done so without questioning whether the CDA applies in such circumstances. See Hung Tan Phan v. Lang Van Pham, 182 Cal.App.4th 323, 324-28, 105 Cal.Rptr.3d 791 (2010); Delfino v. Agilent Techs., Inc., 145 Cal.App.4th 790, 795-96, 804-08, 52 Cal.Rptr.3d 376 (2006); Beyond Sys., Inc. v. Keynetics, Inc., 422 F.Supp.2d 523, 528, 536-37 (D. Md. 2006).
Apart from the private nature of Direct Messaging, plaintiffs identify no other way in which their Direct Messaging theory seeks to treat Twitter as anything other than a publisher of information provided by another information content provider. Accordingly, plaintiffs’ claims based on this theory are DISMISSED.
III. PLAINTIFFS’ PUBLIC POLICY THEORY
Finally, plaintiffs argue that barring their claims “would be at odds with the purported goals of the CDA ... ‘[to] encourage the unfettered and unregulated development of free speech on the Internet’” because “Congress surely did not intend to promote speech that aids designated terrorist organizations.” Oppo. at 8 (quoting Batzel, 333 F.3d at 1027). But if Congress intended such a carve out to the CDA, it would have included one. I am not at liberty to create it.
Moreover, plaintiffs’ argument is incoherent. If the goal of the CDA is to “encourage the unfettered and unregulated development of free speech,” any policy that requires interactive computer service providers to remove or filter particular content undermines this purpose. Such a policy would require companies like Twitter to institute (1) expensive and likely imperfect content-specific controls or (2) broad content-neutral restrictions that suppress content across the board. If content-specific controls are expensive enough to institute, and the penalties for failure to adequately control objectionable content are sufficiently severe, companies like Twitter will be encouraged to reduce their services or stop offering them altogether. There is no debate that pro-ISIS content is objectionable, but that does not mean that a carve out is consistent with the CDA’s purpose. The exact opposite is true. Shielding interactive computer service providers from publisher liability for all content encourages these companies to create “platform[s] ... allow[ing] for the freedom of expression [of] hundreds of millions of people around the world,” SAC ¶ 35, just as the CDA intended.
Plaintiffs acknowledge that allowing their claims might have a minor “‘chilling effect’ on Internet free speech,” but insist that, at most, allowing these claims “would deter interactive computer services from knowingly providing material support to terrorists.” Id. This oversimplifies the obligation plaintiffs seek to impose on Twitter. Twitter provides its services to the public indiscriminately. It does not actively provide material support to terrorists. A policy holding Twitter liable for allowing ISIS to use its services would require it to institute new procedures and policies for screening and vetting accounts before they are opened; identify and suspend the accounts of users posting pro-ISIS content; and even identify and suspend the accounts of users promoting terrorism through the direct messaging feature. These are not minor obligations, as they would require Twitter to fundamentally change certain aspects of its services and overturn its hands-off, content-neutral approach. That plaintiffs seek to hold Twitter liable for allowing only one type of content—content that supports terrorism—does not make this a minor exception to the CDA’s protections. Requiring Twitter to make any content-based publishing decisions would require it to create and implement filtering procedures that it does not currently have. The difference between treating Twitter as a publisher for one type of content, as compared to no content, is substantial.
Congress, not the courts, has the authority to amend the CDA. Plaintiffs’ policy arguments do not justify allowing their claims to proceed.
CONCLUSION
Twitter’s motion to dismiss the SAC is GRANTED WITHOUT LEAVE TO AMEND. I granted leave to amend the FAC, but the restructured SAC suffers from the same fatal infirmities as the FAC. I confirmed with plaintiffs’ counsel at the hearing that they did not seek further amendment, and I have concluded that further amendment would be futile.
IT IS SO ORDERED.
1. Twitter’s Direct Messaging capabilities allow Twitter users to communicate privately through messages that can be seen only by the people included on them. SAC ¶ 43.
2. Congress enacted section 230(c)(1) in part to respond to a New York state court decision, Stratton Oakmont, Inc. v. Prodigy Servs. Co., 1995 WL 323710 (N.Y. Sup. Ct. May 24, 1995), finding that an internet service provider could be held liable for defamation based on third-party content posted on its message boards. See Internet Brands, 824 F.3d at 851-52; Barnes, 570 F.3d at 1101; Roommates, 521 F.3d at 1163.
3. While courts have not required plaintiffs bringing ATA claims based on material support theories to “trace specific dollars to specific attacks,” Strauss v. Credit Lyonnais, S.A., 925 F.Supp.2d 414, 433 (E.D.N.Y. 2013); accord Linde v. Arab Bank, PLC, 97 F.Supp.3d 287, 328 (E.D.N.Y. 2015) (on appeal), they have nevertheless rejected alleged causal connections that are too speculative or attenuated to raise a plausible inference of proximate causation, Terrorist Attacks, 714 F.3d at 123-25; Rothstein, 708 F.3d at 94-98.