Photo: a man dressed as Judge Dredd, with a blonde woman dressed as Anderson saluting behind him, at San Diego Comic-Con.

Are sites ever legally responsible for what is posted in the comments?

We interviewed Jeff Hermes, Deputy Director of the Media Law Resource Center in New York, about news websites and the protections and potential liabilities that come with hosting a community.

Your tl;dr is this: if your site is based in the U.S., you’re protected from being legally liable for what commenters write, even if you filter or pre-moderate your comments, as long as:

  • you don’t edit their content in a way that changes its intent, or insert illegal content
  • you don’t encourage people to break the law with their comments
  • the commenter isn’t one of your employees or, in some cases, freelancers

There may also be exceptions regarding comments that violate intellectual property or federal criminal laws. If you have assets or people based overseas, other laws will likely apply.

However, none of this prevents you from being sued; it only protects you from liability if you are. Read on to learn more about these issues and others.

The law that protects US news websites from losing a lawsuit over the content of their comment spaces is section 230 of the Communications Decency Act, also known as CDA 230. Can you briefly outline what it says?

Section 230 is a federal statute, with two main provisions.

The first states, “No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” The practical upshot of this provision is that “interactive computer services” — including websites, social media platforms, and a variety of other digital services — cannot be held liable for the consequences of third-party content.

This includes immunity from a wide variety of content-based legal claims, including not only traditional media-related claims like defamation and invasion of privacy, but also less obvious claims like negligence and other tort claims. This immunity applies regardless of whether a website is aware that its users’ activity is potentially (or actually) illegal, and there is no affirmative obligation to monitor content or to remove such material upon request.

The second key provision comes up less often in court, but is no less important than the first. That provision states, “No provider or user of an interactive computer service shall be held liable on account of any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected.”

In other words, good faith moderation of user-submitted content to block offensive material cannot form the basis of legal liability. Another related clause immunizes from liability “any action taken to enable or make available to information content providers or others the technical means to restrict access to [objectionable] material” — so providing “report” or “block” tools to a user community is also protected.

Taken together, the two provisions mean that while there is no duty to monitor for or to remove unlawful third-party material, any good faith efforts that a website makes in that regard cannot subject the site to legal liability. There are exceptions, as I’ll discuss below, but the protection is broad and strong.

Two caveats are worth noting up front, however.

First, Section 230 does not immunize a website against the conduct of its own employees. It is generally the case that merely paying third parties to create content does not by itself create an employment relationship, but a website should look carefully at such relationships in case other factors might cause a third party to be considered an employee under state law. In particular, a news organization should consult with counsel before assuming that Section 230 will cover contributions by a freelancer.

Second, it is possible to waive Section 230 protection. While a site might not have an automatic legal obligation to remove unlawful content, it can create such an obligation for itself by promising to remove particular offensive material. If someone relies on that promise and the site fails to follow through, there might be a basis for a legal claim. This is why most websites’ terms of service reserve the right to remove offensive material but do not promise that they will actually do so.

Does it still apply if a news site does some filtering/moderation of the comment space, either by humans or algorithms?

Absolutely. As noted above, Section 230 explicitly protects efforts to moderate offensive content. Moreover, ordinary moderation and editing for other purposes (e.g., to keep discussions on topic or to improve the presentation of user content) will not void the statute’s protection. While a small handful of court decisions have found a potential issue where a website’s editing was not neutral or impartial, the overwhelming weight of judicial rulings involving Section 230 suggests that a website need not be unbiased in the perspective it presents.

That being said, websites should be careful not to transform themselves into “co-developers” of unlawful aspects of a user’s content. This can potentially happen if a site:

  • interjects its own unlawful content into material submitted by a user;
  • modifies a user’s submission to create an effect that was not there previously (e.g., by deleting the word “not” from the sentence “John is not a thief”);
  • presents user content in a manner that conveys a different meaning than the user’s submission standing alone (e.g., by placing the user’s submission under a header or in groups with other material in a way that creates a new and defamatory meaning); or
  • encourages or requires users to post unlawful content (but note that inviting users to post content that is merely offensive or even potentially damaging is fine, so long as the site is not actually inviting users to violate the law).

Basically, so long as a website does not alter a user’s original message, keeps its own commentary distinct, and does not urge users to submit defamatory or otherwise unlawful material, Section 230 will protect the site.

Website staff can interact freely with users on discussion forums, so long as, again, they do not urge users to submit defamatory or otherwise unlawful comments. Automatic moderation is also permissible, although it is important to keep tabs on algorithmic solutions to make sure they are not generating unexpected results.
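To make the point about “unexpected results” concrete, here is a minimal sketch of naive keyword-based pre-moderation in Python. Everything in it (the blocklist, the function names, the sample comment) is hypothetical and invented for illustration; it is not any particular site’s system, but it shows how a simple substring filter can wrongly flag innocent comments, which is exactly why algorithmic moderation needs ongoing human oversight.

```python
import re

# Hypothetical blocklist for illustration only; production systems rely on
# richer signals (context, user history, ML classifiers, human review).
BLOCKLIST = {"ass"}

def naive_filter(comment: str) -> bool:
    """Hold a comment for review if any blocklisted term appears as a substring."""
    lowered = comment.lower()
    return any(term in lowered for term in BLOCKLIST)

def word_boundary_filter(comment: str) -> bool:
    """A slightly safer variant that matches whole words only."""
    words = set(re.findall(r"[a-z']+", comment.lower()))
    return bool(words & BLOCKLIST)

# The classic "unexpected result": an innocent word contains a blocked substring.
comment = "The classics association meets on Tuesday"
print(naive_filter(comment))          # True  -- wrongly held for review
print(word_boundary_filter(comment))  # False -- passes, as it should
```

Even the whole-word variant will misfire on misspellings and deliberate evasions, so periodically reviewing what a filter is actually holding or passing remains important.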

Is there any behavior by users that it doesn’t cover?

Yes. There are two primary exceptions to Section 230’s protection: it does not cover violations of intellectual property law, and it does not cover violations of federal criminal law.

The intellectual property exception means that Section 230 does not protect a website against federal copyright or trademark claims arising out of user content. Depending on your jurisdiction, it might also mean that Section 230 does not protect against state law intellectual property claims, such as state-level trademark, right of publicity, or trade secret claims.

Helpfully for companies in Silicon Valley, the U.S. Court of Appeals for the Ninth Circuit (the federal appellate court with jurisdiction over Alaska, Arizona, California, Hawaii, Idaho, Montana, Nevada, Oregon, and Washington) has held that the exception applies only to federal (not state) intellectual property claims; however, other jurisdictions within the U.S. may differ.

A different statute, the Digital Millennium Copyright Act, governs the liability of online intermediaries for user-supplied content that allegedly violates copyright. The DMCA implements a “notice and takedown” scheme for such content, but the requirements of the law are complex and there are several hoops that a website must jump through to qualify for protection. Site operators should consider seeking legal advice about how to comply with these requirements.

There are, unfortunately, no comparable statutes covering other forms of intellectual property. However, for many kinds of intellectual property other than copyright, a plaintiff must prove that a site either intended to infringe or knew that content was infringing. This allows sites to limit liability by removing content upon notice; for example, eBay has been successful at avoiding liability for trademark infringement related to counterfeit goods by instituting an aggressive program to remove listings as soon as the site becomes aware that there might be a problem.

As regards federal criminal law, there are a wide variety of federal criminal statutes that might affect online content, including laws regulating child pornography, material support for terrorist organizations, threats of violence, advertising related to sex trafficking, and many other issues.

Section 230 does not protect a website in the event that this kind of material shows up in user content, and sites that become aware of it should remove it immediately. Other obligations might also apply; for example, a site operator is required by another federal law (18 U.S.C. § 2258A) to report instances of child pornography that it discovers to the National Center for Missing and Exploited Children.

Fortunately, as with many kinds of intellectual property claims, federal criminal laws generally will not apply to websites unless they have actual knowledge of the illegal content and do nothing about it. Nevertheless, sites should take alleged violations seriously.

Does it also protect US publishers from being sued by people outside of the US?

This question raises an important point you might not have intended: Section 230 doesn’t, unfortunately, prevent any lawsuit from being filed, regardless of where the plaintiff is from. Instead, it protects against liability in the event of suit. That being the case, it’s important for sites to think about how they will pay their defense costs even when they are likely to win the case eventually; insurance policies covering content-related claims are the most common solution.

With respect to liability in lawsuits filed by non-U.S. citizens, Section 230 is fully effective in lawsuits filed in U.S. courts. Moreover, another statute, 28 U.S.C. § 4102(c) (part of the SPEECH Act), prohibits U.S. courts from recognizing or enforcing foreign judgments that are inconsistent with Section 230. Therefore, a foreign plaintiff is unlikely to be able to reach assets located in the United States on the basis of a foreign judgment.

However, Section 230 does not apply outside the United States. Therefore, if a site operator has assets or personnel outside of the United States, or intends to expand into or travel to another country, they might need to consider their responsibility for user content under that country’s laws.

Do any other countries that you know of have similar laws to CDA 230?

No other country goes as far as the U.S. does to insulate online intermediaries against liability for user-generated content. When other countries have laws on this topic, they are generally “notice and takedown” regimes, where the intermediary can avoid liability by expeditiously removing the content at issue. The person who originally posted the material might or might not have a legal channel for challenging the removal.

What do you think are the biggest legal issues that news websites might face from hosting comments or other user contributions?

There are several potential issues:

  • Liability for user content.

As discussed above, there are both exceptions to Section 230 and ways to lose Section 230 protection. While Section 230 protects the vast majority of interaction with user-generated content, it is important to be aware of the statute’s limitations in case an unusual situation arises.

  • Indirect impact of comments in defamation cases.

When a plaintiff sues a news organization for defamation, one of the key elements of his/her case will be establishing the damaging impact of the article or broadcast at issue. Intemperate reader/viewer comments make fantastic evidence to show a jury how badly the plaintiff’s reputation has been injured. There is, of course, a serious question whether such comments really say much about the plaintiff’s standing in the world at large, but a horrible comment can have an outsized impact on an angry juror.

  • Getting caught in the cross-fire.

Even if a news site or other digital platform isn’t the direct target of a lawsuit, it can find itself embroiled in legal proceedings against an individual user. This is particularly true where the user is anonymous, because a website can find itself subpoenaed to turn over the user’s identity. The site must then decide whether and how to resist those efforts. 

This situation can be particularly onerous where a government investigation is involved, such as happened to Reason.com earlier this year when the Department of Justice interpreted some of its readers’ comments as potential threats to the life of a federal judge.

What else do you think websites should consider when allowing and using user-generated contributions/comments?

In the United States, at least, there is substantial legal protection for those who attempt to moderate or curate user commentary to present a pleasant online experience.

There is nothing that requires your comment section to be a sewer, and in fact a completely hands-off approach may not even be the safest one. There is always the possibility that user content, if left to run rampant, might cross a serious line, such as violating federal criminal law, and drag the website into a legal thicket.

This interview was conducted over email and has been lightly edited. It does not constitute legal advice.

Photo by Pat Loika, CC-BY.
