Litigating Platform Liability in Europe: New Human Rights Case Law in the Real World – Daphne Keller

22 04 2016

This is the third of four posts on the European Court of Human Rights’ (ECHR) rulings in Delfi v. Estonia and MTE v. Hungary. In both cases, national courts held online news portals liable for comments posted by their users – even though the platforms did not know about them.

These rulings effectively required platforms to monitor and delete user comments in order to avoid liability. The first ruling, Delfi, condoned monitoring in a case involving threats and hate speech. The second, MTE, held that a nearly identical outcome in a case involving trade defamation violated free expression guarantees in the European Convention on Human Rights (Convention).

The two rulings are explained in more detail in Post 1. Post 2 discusses their operational impact for intermediaries hosting user-generated content in Europe. This post considers how the two cases may affect current and future litigation.

The real-world litigation impact of Delfi and MTE will depend in part on individual countries’ laws and attitudes toward the Human Rights Court. As discussed in Post 1, courts will be able to resolve some cases under national law, without reaching questions about Convention rights. When courts do recognize human rights issues, it’s not always clear how much they care what the ECHR says. (It’s even less clear whether other branches of government will care. Turkey, for example, has been hauled to the ECHR and lost twice on nearly identical claims about Internet content blocking.) But there is a paucity of high-level case law on point, so defendant platforms and interested civil society groups will likely seize on the MTE ruling in response to monitoring demands by plaintiffs in European courts.

Cases about “sub-Delfi-level” expression

The key distinction between MTE and Delfi was the kind of “bad” speech at issue: Delfi approved monitoring for hate speech and threats, while MTE disapproved monitoring for defamation. Thus, for defendants in cases about sub-Delfi-level tortious expression, such as defamation, MTE should in principle be very helpful. It provides a structuring outer limit, based on fundamental rights, for courts applying vague or outdated laws to Internet services. Even for the stronger Delfi-level comments, the MTE analysis could – and I believe should – carry the day and preclude monitoring requirements for platforms other than news portals.

Of course, this raises the question of which claims are more like Delfi – speech so bad, it justifies the free expression harms of monitoring orders – and which are more like MTE. Where does copyright infringement fall, for example? What about data protection violations? Both claims are rooted in rights that the ECHR has traditionally balanced carefully against free expression, so part of the answer may lie in that case law. Another consideration may be the nature of the lawful speech that will likely fall prey to over-zealous removal. Are the kinds of expression that might be mistaken for defamation somehow different from, or more valuable than, the kinds that might be mistaken for copyright infringement? To my mind, evaluating hypothetical, future speech and calculating whether to accept a high risk that it will be improperly deleted is a dangerous exercise. But courts may see this differently.

News defendants versus other platform defendants

There is some room for debate about how the identity of the MTE and Delfi defendants as journalists plays into the analysis. On its face, the law seems much better for non-news-source intermediaries. The Delfi court focused on the defendant’s status as a “professional publisher,” and its reasoning drew extensively on cases and considerations specific to journalism. (Paras 129-135) It expressly “emphasise[d] that the present case relates to a large professionally managed Internet news portal run on a commercial basis which published news articles of its own and invited its readers to comment on them,” and that it “does not concern other fora on the Internet where third-party comments can be disseminated, for example an Internet discussion forum or a bulletin board[.]” (Paras 115-117)

Since Delfi’s status as a professional publisher increased its responsibilities, more traditional Internet hosts should logically have fewer duties. MTE, however, suggests a possible counterargument – that defendants’ journalistic role actually helped their defense. The court there noted (as did the Delfi court) that “providing [a] platform for third-parties to exercise their freedom of expression by posting comments is a journalistic activity of a particular nature,” suggesting that the defendants benefitted from their role in journalism. This idea may be reinforced by MTE’s reliance on case law protecting journalists who disseminate statements made by sources. (Para 79)

Northern Ireland’s Facebook/McCloskey case

One place the MTE ruling could be relevant is in a case against Facebook, currently before the Court of Appeal of Northern Ireland. A lower court there held the platform liable for misuse of private information, based on content posted by a user. It considered a data protection claim, but rejected it on jurisdictional grounds. Facebook had not been notified about the particular posts at issue in the case, though it knew of similar content from the same user.

By holding Facebook responsible for content it didn’t know about, the Northern Irish court effectively imposed a monitoring requirement comparable to those in Delfi and MTE. In theory, the case could be resolved on the same fundamental rights reasoning the ECHR used. There are national and EU law issues to clear up first, though, including Facebook’s argument that the eCommerce Directive precludes such a monitoring requirement.

At least two interesting wrinkles could make the ECHR case law relevant to the case’s outcome, however. First, the Northern Irish court reasoned that its conclusion was consistent with the eCommerce Directive, because Facebook’s knowledge of similar content in a related case meant it “knew” about these posts as well. If that analysis were right, it would mean that claimants who tell an intermediary about one piece of tortious content could subsequently impose a monitoring duty, without violating eCommerce Directive Article 15. That in turn raises the question whether this novel legal path toward intermediary policing duties would still be barred by the Convention. Are the Article 10 concerns of the MTE court somehow lessened if monitoring is prompted by a user complaint instead of background law? Would the answer be different if, as seems possible in this case, Facebook were expected to monitor only this particular user’s account?

Another major wrinkle in the Facebook case could make MTE and Delfi more directly relevant. The case includes a data protection claim against the platform based on content posted by a user. The lower court rejected this claim, but the appellate court might not. If the data protection claim is valid, the next question is whether the eCommerce Directive, including Article 15’s prohibition on general monitoring, applies to that claim. Many practitioners believe that it does not: that data protection-based claims, including “Right to Be Forgotten” removal demands, are carved out of the eCommerce intermediary liability rules. That’s an alarming possibility – that plaintiffs can evade those rules simply by adding a data protection claim to their pleadings, even for cases that would ordinarily be resolved under defamation, traditional privacy tort law, or other areas that clearly fall within the eCommerce framework. If the court accepted this argument and declined to apply the eCommerce Directive, the case could turn on other legal constraints on intermediary liability – in particular, the Article 10 limits that the MTE court said are mandatory under the Convention.

Germany’s monitoring cases

Another interesting place to look for post-Delfi litigation fallout will be Germany. Courts there have imposed take down/stay down obligations on a number of Internet intermediaries under the “Stoererhaftung” or “interferer liability” doctrine. In a case involving copyright claims against Rapidshare, the Federal Court of Justice said that such obligations do not conflict with the restriction on general monitoring under Article 15 of the EU’s eCommerce Directive, because Rapidshare only had to monitor for the specific copyrighted works identified by the rightsholder.

Defendants in new German Stoererhaftung cases may invoke MTE to oppose monitoring demands. In copyright or defamation cases, they may argue that the expression at issue is less damaging than the unprotected hate speech in Delfi, and therefore cannot justify the monitoring-based harms to Article 10 rights identified in MTE. Plaintiffs in turn may contend that orders to monitor for precisely specified items of content pose little risk to lawful expression, because a platform is not called on to assess the legality of user speech – only to identify duplicates. I suspect plaintiffs would win that one in Germany. Courts there may also see little connection between their cases and the new ECHR rulings, because they view Stoererhaftung monitoring injunctions as remedies rather than determinations of tort liability.

But the MTE case raises a new and interesting procedural angle for the defense. Stoererhaftung defendants have, so far, been unable to get Court of Justice of the European Union (CJEU) review of their cases. That’s because the power to refer cases to the CJEU rests with the national court, not the parties. But the ECHR doesn’t work that way. A party can take a claim based on fundamental rights to the ECHR without court permission, once it has exhausted domestic appeals. Encouraged by the MTE ruling, platforms may be likelier to try this avenue of review.

Standing to assert Internet users’ rights

A final interesting litigation angle concerns the distinction between the platform’s own Article 10 rights, and those of its users. In intermediary liability litigation, arguments based on users’ rights are typically more compelling and important than those based on the platform’s own rights, including its right to conduct a business. But are platforms allowed to raise them? Do they have standing to assert rights on behalf of users?

Both the Delfi and MTE rulings fudge this distinction at times, mentioning both user and platform rights. But MTE’s analysis seems almost entirely driven by consideration of Internet users’ rights, and defendant platforms clearly emphasized those rights in their pleadings. (Paras 36-39, 61, 82, 86, 88) The defendant in Delfi, by contrast, appears to have raised only the platform’s own “freedom to impart information created and published by third parties.” (Paras 61, 73) The Court in Delfi identified the question before it as “whether… holding the applicant company liable for these comments posted by third parties [breached] its freedom to impart information” and stated in its conclusion that strict liability “did not constitute a disproportionate restriction on the applicant company’s right to freedom of expression.” (Paras 140, 162; emphasis added)

Setting aside procedural questions about standing, the substantive outcome of MTE means that platform defendants should be able to raise users’ rights in national courts. Otherwise the Article 10 rights identified by the Court would be effectively unprotectable in intermediary liability cases. An interesting twist on the question is whether the ruling affects individual users’ rights to sue for “wrongful removal.” At present, a user whose lawful speech is removed based on a false accusation of illegality would have a hard time getting into court to sue the accuser, unless a government actor instigated the removal. (As for suing the platform, we’ll see what the French Courbet case says, but that seems a much harder claim.) But if users never have standing to sue, the Article 10 rights identified in MTE are truly a right without a remedy.

Up Next

The next and final post will look at how Delfi and MTE could affect policy discussions, in Brussels and elsewhere, regarding potential changes to the law of platform liability.

The first post in this series, New Intermediary Liability from the Court of Human Rights: What will they mean in the real world?, was published on 19 April 2016; the second post, Policing online comments in Europe: New Human Rights Case in the Real World, was published on 21 April 2016.

This post originally appeared on the blog of the Centre for Internet and Society and is reproduced with permission and thanks.

