Notice and Takedown under the GDPR: an operational overview – Daphne Keller

12 11 2015

This is one of a series of posts about the pending EU General Data Protection Regulation (GDPR), and its consequences for intermediaries and user speech online. In an earlier introduction and FAQ, I discussed the GDPR’s impact on both data protection law and Internet intermediary liability law. Developments culminating in the GDPR have put these two very different fields on a collision course – but they lack a common vocabulary and are in many cases animated by different goals.  Laws addressing concerns in either field without consideration for the concerns of the other can do real harm to users’ rights to privacy, freedom of expression, and freedom to access information online.

This is the third post in a series analyzing the EU’s pending General Data Protection Regulation (GDPR).  The previous post reviewed high-level problems with the GDPR’s process for erasing content posted online by Internet users.  The process disproportionately burdens the rights of Internet users to seek and impart information online.  Those rights could be much better protected, without sacrificing remedies for people whose privacy has been violated, if the GDPR treated erasure of user-generated content separately from erasure of data collected directly by companies based on user behavior and used for back-end processes such as profiling.  The GDPR could then apply standard, well-tested procedural checks to limit erroneous or bad faith removals of lawful user-generated content.

This post goes into more detail about the Regulation’s exact language and the removal process.  It will walk through each step an intermediary would follow to erase user-generated content based on the GDPR’s Right to Be Forgotten provisions.

For the person requesting content removal on the basis of privacy or data protection rights, the removal process will be something of a black box – though increasing calls for transparency could change that somewhat in practice.  From the perspective of the person whose speech rights are affected, it’s an even blacker box.  In many cases, the speakers may not know that their content has been challenged and taken down; if they notice that it’s gone, they won’t know why.  From the perspective of the people seeking information online, the process is entirely opaque.   They’ll never know what they’re missing.

From the intermediary’s perspective, the process is an operational challenge, requiring an ongoing investment of time, personnel, legal analysis and engineering work.  Not all intermediaries will choose to make that investment, or to go through the process described here. Financial incentives for companies to simply honor all removal requests, or to err on the side of removal in case of doubt, are extremely strong.   An intermediary risks sanctions of up to 0.5% — or 2%, or even 5%, depending which draft provision you read — of its annual global turnover every time it chooses to keep user content online.  (Art 79)  Those are dangerously high figures for any company, and particularly for intermediaries handling multiple takedown requests.  As a result, the GDPR will likely lead to the frequent erasure of lawful, free expression by Internet users.

The GDPR draft provisions cited here all appear in the European Data Protection Supervisor’s comparison document.

  1. The Removal Request Comes In

In practice and under many intermediary liability laws or model rules, an intermediary may receive an initial removal request but not have enough information to evaluate it until later, when the requester sends more information.  The GDPR does not clearly distinguish between those two stages, which creates problems I describe below.

a. The Initial Communication Arrives

The first thing that happens is that the intermediary gets a removal request, asserting that content put online by another Internet user violates the requester’s rights.  If it does not have a well-designed intake form for requests (because it’s a small company, for example), or if the requester did not use the form, further communication will often be necessary for clarification.  Such back and forth is common, because removal requests often inadvertently omit key information like the location of the offending content or the legal right it is said to violate.

This part of the process would be greatly improved for both the requester and the intermediary if they could rely on existing intermediary liability law or best practices regarding the information that must be included in removal requests. Those rules tell intermediaries, as a procedural matter, when to proceed to the request evaluation stage; and they ensure the intermediary has the information it needs when that stage comes.  For people requesting removal, clear rules help them submit an actionable request on the first try, and tell them when the ball is in the intermediary’s court to respond.

The GDPR could help requesters and companies by implementing form-of-request requirements modeled on the DMCA, the Manila Principles, or even existing national-law guidelines for complaints to  DPAs. These need not be onerous.  Guidelines commonly call for the requester to provide things like her contact information, the exact location of the content, and the legal basis for removal.  For Right to Be Forgotten requests, Microsoft’s Bing removal request form suggests a useful additional element: the requester must explain any public role she has in her community.
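To make the idea concrete, here is a minimal sketch of what such an intake check might look like. This is purely illustrative: the GDPR specifies no such form-of-request elements, and the field names below are hypothetical, loosely modeled on DMCA-style notice requirements and the Manila Principles.

```python
# Illustrative sketch only: the GDPR specifies no form-of-request
# elements. Field names are hypothetical, loosely modeled on
# DMCA-style notices and the Manila Principles.

REQUIRED_FIELDS = {
    "requester_contact",   # how to reach the requester
    "content_location",    # exact URL(s) of the challenged content
    "legal_basis",         # the legal right the content allegedly violates
}

def missing_fields(request: dict) -> set:
    """Return the required fields absent or empty in a removal request."""
    return {f for f in REQUIRED_FIELDS if not request.get(f)}

# An incomplete request: the intermediary knows exactly what to ask for.
incomplete = {"requester_contact": "jane@example.com"}
print(sorted(missing_fields(incomplete)))  # ['content_location', 'legal_basis']
```

A check like this tells both sides when a request is actionable: the requester learns what is still missing, and the intermediary knows when the ball is in its court.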

The GDPR allows the intermediary to ask the requester for ID at this stage, if there is a reasonable doubt as to her identity.  (Art 10.2, 12.4a, Council draft).   Intermediaries can also reject requests that are “manifestly unfounded or excessive”; by doing so they assume the burden of proof for that conclusion.  (Art. 12.4, Council draft)

b. The Requester Provides Any Additional Information Needed for the Intermediary to Evaluate Her Claim

At some point, with or without further communication with the requester, the intermediary acquires enough information to make a judgment about honoring the removal request.  (Or, is presumed by law to have enough information.)  You could think of this as the point when the request becomes procedurally ripe or valid, in the same way that a court pleading becomes procedurally valid by meeting legal filing formalities.  Once it is reached, the intermediary can turn to the substantive legal claim being asserted.

The GDPR requires a one-month turnaround time for most removal requests.  Hopefully this only begins once the request is procedurally ripe and evaluation is possible.  The GDPR should make this explicit, though: the one-month clock should not start ticking the minute the first communication comes through the door, unless that communication provides the necessary, specified information.  (Art. 12.2)

  2. The Intermediary Restricts Public Access to the Disputed Content

Now, the first really unusual thing happens: in most cases, the intermediary must take the challenged content offline immediately, before weighing the public interest and perhaps before even looking at the content.   The GDPR calls this “restriction” of processing.  The restriction provisions have changed language and location from draft to draft, and are difficult to parse. But they appear to mean that intermediaries take challenged content offline first, and ask questions later, subject to some unclear exceptions.  They may even mean that the intermediary must temporarily remove content as soon as the initial complaint identifies the content’s location, even before the requester clarifies the basis for the legal claim.  To my knowledge, this is unprecedented.  No other intermediary liability system gives one user this kind of instantaneous veto power over another user’s expression.

The rest of this subsection will parse the GDPR’s dense legislative language about restriction of content.  Readers who don’t like that kind of thing should probably skip ahead to Step 3.  The basic overview is this:  (a) most provisions clearly say that “restricted” content must be rendered publicly inaccessible; (b) almost any removal request to an intermediary can trigger the restriction obligation; and (c) exceptions to automatic takedown exist, but they aren’t very clear or meaningful.  Minor amendments could, and should, clarify these exceptions to solve the problem I identify here.

a. What does it mean to restrict content?

The GDPR says that restricting content means making it inaccessible to the public.  As the Parliament draft explains, restricted data is no longer “subject to the normal data access and processing operations and cannot be changed anymore” – including, presumably, by the person who uploaded it.  (Parl. Art. 17(4))  The Council draft provides that restricted data:

may, with the exception of storage, only be processed with the data subject’s consent or for the establishment, exercise or defence of legal claims or for the protection of the rights of another natural or legal person or for reasons of important public interest. (Art. 17a(3), see also EDPS draft Art. 19a)

In other words, restricted data is kept in storage and not otherwise processed unless an exception applies.

The Council draft definition of “restriction of processing” introduces the only ambiguous note.  It says restriction is “the marking of stored personal data with the aim of limiting their processing in the future.”  (Art. 4(3a))  For intermediaries, arguably this could mean “marking” back-end copies of user-generated content, but not restricting normal public access.  That’d be odd and inefficient as a technical matter, but at least it wouldn’t burden anyone’s speech and information rights in advance of knowing whether the takedown request is valid.  It’s not likely to be what is meant, though, because the same draft, from the Council, includes the language I quoted above about suspending “normal data access.”

More likely, this anomalous definition just reflects the GDPR drafters’ focus on back-end stored user data, rather than on public-facing online content.  A good revision to the GDPR could track exactly this distinction.  By expressly excluding user-generated content from the restriction provisions, drafters could avoid significant problems that the restriction provisions create for online expression and information rights.

b. What kinds of requests trigger content restriction?

In theory, not all requests should trigger content restriction.  The GDPR says restriction applies only to processing, and to requests, predicated on specific, enumerated legal grounds.  (Parl. Art. 17a(3); Art. 6)  In practice, though, those grounds may cover effectively all processing of user-generated content by intermediaries.

One listed basis for restriction is when content’s “accuracy is contested by the data subject, for a period enabling the controller to verify the accuracy of the data.”  (Council 17a(1)(a); Parl. Art. 17.4(a))  In other words, claims that would once have sounded in defamation, and been subject to well-developed defenses, now lead to immediate suspension of content.  The content can be reinstated when the controller “verif[ies] the accuracy of the data” — generally meaning never, because finding the truth behind real-world disputes is not what intermediaries do well.  Interestingly, the Article 29 Working Party flagged the problems with asking anyone but courts to adjudicate questions of accuracy in its Costeja recommendations, saying that DPAs should await judicial determinations in cases of ongoing dispute about accuracy.  The GDPR nonetheless puts this responsibility in the hands of intermediaries.

The other basis for restriction is even broader, but harder to piece together from the GDPR text.  It arises when an intermediary’s initial processing of user-generated content took place on the basis of “legitimate interests” that outweighed the privacy rights of people mentioned or depicted in that content.  (Art. 6.1.(f)) Under data protection law, this is the legal basis for most, possibly all processing of such data by intermediaries.  So this basis for restriction seems to apply generally to intermediaries facing content removal requests.

To figure out whether to restrict in response to such a request, an intermediary must perform a multi-step, circular analysis, which hinges almost entirely on balancing poorly defined “legitimate interests.” This “legitimate interests” analysis is very similar to the analysis the intermediary is supposed to perform later, to decide whether to permanently erase the content.  The “legitimate interests” basis for temporary restriction should be different from the “legitimate interests” basis for permanent erasure, though.  The two analyses happen at different times, and an intermediary is supposed to restrict “pending the verification whether the legitimate grounds of the [intermediary] controller override those of the data subject,” i.e. restriction is provisional until the erasure decision is made.   (Council 17a(1)(c)).   But given the confusing similarity of the standards, and the clear intention that some content be restricted from public access right away, we should not be surprised to see quick and sloppy — and permanent — removal decisions being made  immediately upon receipt of challenges to online expression.

Similar  GDPR language in another draft, which may or may not mean the same thing, says restriction lasts until “the controller demonstrates compelling legitimate grounds for the processing which override the interests, fundamental rights and freedoms of the data subject.” (Parliament Art 19(1)). Here again, the standard for ending restriction is unclear.  It might boil down to the same vague, widely criticized standard set by the Court of Justice of the European Union (CJEU) in the Costeja “Right to Be Forgotten” case – but expressed in a lot more words.
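Reduced to pseudologic, the restrict-first sequence I read into these provisions looks roughly like the sketch below. This is my own reading, not statutory text; the exception tests (`important_public_interest`, `protects_others_rights`) are placeholders for standards the GDPR leaves undefined.

```python
# Sketch of the restrict-first sequence as I read the drafts. The
# exception parameters are placeholders for standards the GDPR
# leaves undefined; nothing here is specified in the Regulation.

def handle_request(content, request,
                   important_public_interest=False,
                   protects_others_rights=False):
    # Step 2: restrict (take offline) immediately upon a valid request,
    # unless one of the unclear exceptions applies.
    if not (important_public_interest or protects_others_rights):
        content["public"] = False  # restricted pending evaluation

    # Step 3 (later): balance "legitimate interests" to decide erasure.
    if request["interests_outweigh_speech"]:
        content["erased"] = True
    else:
        content["public"] = True   # reinstate if the claim fails
    return content

post = {"public": True, "erased": False}
result = handle_request(post, {"interests_outweigh_speech": False})
print(result)  # {'public': True, 'erased': False} — restricted, then reinstated
```

Note that between the restriction and the reinstatement the lawful content is simply gone from public view, which is the heart of the problem described above.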

c. Exceptions to the restriction requirement

Intermediaries can decline to restrict content “for reasons of important public interest” or to protect “the rights of another natural or legal person.” (Council 17a(3); another draft applies this exception only for rights of  “a natural person,” meaning a publishing or intermediary company’s interests would not qualify.  EDPS, Art. 19a.)  It’s unclear if these exceptions set a higher or lower bar than the “legitimate interests” standard intermediaries are supposed to apply at other points in their analysis of removal requests.   Arguably, these exceptions protect even less content than the CJEU’s Costeja standard: Costeja lets Google reject de-indexing requests based on the “preponderant interest of the general public,” while the GDPR lets intermediaries leave content up, during the time it takes to evaluate a removal request, based on an  “important public interest.”  (Costeja Par. 97)   Intermediaries willing to bet real money that they know the difference between “preponderant” and “important” can choose their actions accordingly.  Intermediaries flummoxed by these standards will simply take the content offline without additional review.

Another exception permits content to remain publicly available at this stage “in order to protect the rights of another natural or legal person”. This seems more promising.  Content removal requests, almost by definition, affect the rights of another person — the content’s publisher.  Intermediaries or publishers could even argue that every request to remove public content (as opposed to every request to erase user data in back-end storage) qualifies for this exception.   It is unrealistic to expect intermediaries broadly to take this position, of course, given uncertainty about whether DPAs or courts would agree, and given that errors expose the company to fines heavy enough to sink a business.  But GDPR drafters could easily modify this part of the statute to protect online speakers from having legal content suppressed, by specifying that pre-review restriction is never appropriate in the case of user-generated content.

  3. The Intermediary Decides Whether to Permanently Remove the Content

The intermediary now comes to the crux of the issue:  Has the complainant made a claim strong enough to override the interests of the person who put the content online in the first place – as well as the interests of all the people who might want to see it, and the interests of the intermediary itself?  If so, the content gets erased.  I’ll write about how the GDPR shapes that substantive decision — the merits of the “Right to Be Forgotten” claim —  later.  One interesting procedural wrinkle is that, according to the Article 29 Working Party, in difficult cases the intermediary may at this stage consult with the user who uploaded the information.  I’ll talk later about the limited practical value of this possibility.  For now, I assume that the intermediary agrees to remove the content.    

  4. The Content is Removed

For a search index, presumably what the GDPR calls “erasure” is meant to instead be the more limited de-linking mandated in the Costeja case.  (The GDPR does not actually say this.  But if the GDPR rendered Costeja obsolete, a more expert data protection lawyer than I would surely have pointed it out by now.)  Following the CJEU’s ruling, this means removing links, titles, snippets and cache copies of the webpage from only certain search results: those generated by searching on the requester’s name.
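To illustrate the narrow scope of Costeja de-linking (my own sketch; no real search engine’s implementation is public), results are suppressed only for queries on the requester’s name. Other queries that surface the same page are unaffected.

```python
# Hypothetical sketch of name-query-only de-linking under Costeja.
# The names and URLs here are invented for illustration; real search
# engines' implementations are not public.

# Pairs of (name query, URL) that have been de-listed.
delisted = {("jane doe", "https://example.com/old-article")}

def filter_results(query: str, results: list) -> list:
    """Suppress a de-listed URL only when the query is the requester's name."""
    q = query.strip().lower()
    return [url for url in results if (q, url) not in delisted]

urls = ["https://example.com/old-article", "https://example.com/other"]
print(filter_results("Jane Doe", urls))     # article suppressed for name query
print(filter_results("old article", urls))  # same page still listed otherwise
```

The page itself stays online and reachable through any other route; only the name-based search path to it is cut.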

For hosting platforms, complying with a GDPR removal request has far greater impact on Internet users’ free expression and access to information. (As discussed in the Introduction, I expect that hosts will be deemed controllers subject to GDPR erasure obligations.)  Deletion by a host often eliminates the only place on the Internet where particular material can be found.  In practice, for ephemeral content like social media posts, it is often the author’s only copy as well.  Content deleted by an Internet host may be well and truly gone from the world.  Given these dramatically greater consequences, the standards applied by hosts in making removal decisions should be very different than those applied by search engines – with much greater weight given to free expression concerns.  As a procedural matter, it also seems more than reasonable that a host should postpone final erasure until the speaker has an opportunity to defend her speech. But while the difference between de-indexing search results and deleting content at its source is widely commented upon in academic and policy discussions, I know of no written guidelines for hosts.  The GDPR provides none.

  5. The Requester and Downstream Recipients Are Told About the Removal

The primary person the intermediary must tell about removal is, of course, the requesting data subject. (Art. 12.2)  In addition, to help that person enforce his or her data protection rights, the intermediary must also pass information about the removal downstream, so that whoever received content from the intermediary can also delete it. (Art. 13)    If the intermediary has unlawfully made the data public, it must attempt to undo the damage by tracking down recipients and telling them to delete any copies or links.  (Art 17.2)

These pass-through provisions, while potentially valuable with respect to traditional data controllers — a hospital that shared patient information with an insurer, for example — are an odd fit for intermediaries.  The content they handle typically originates with a third party, and passes through the intermediary’s technical systems without human review.  If that content was “illegal” ab initio under the GDPR, perhaps because of special rules governing sensitive data, must the intermediary then ransack its logs to find and communicate deletion requests to other users who saw it?  Would the person requesting removal – say, someone who was the subject of an ugly Facebook post – even want to risk the potential Streisand Effect from this publicity?

  6. The Intermediary Discloses Identifying Information About the User Who Posted the Content

The GDPR also creates a troubling disclosure obligation in cases where the intermediary got the disputed content from someone other than the person requesting removal — which is the case in most notice and takedown situations.  The intermediary is supposed to tell the requester “the identity and the contact details of the controller” — in other words, the Internet user — who provided the content.  (Art. 14a Council)  While there are arguments that users posting on social media or other hosting platforms do not qualify as controllers, those arguments have fared poorly in court and in analysis from academics and the Article 29 Working Party. (See Ryneš and  Lindqvist cases, and discussion in Bezzi et al, Privacy and Data Management for Life, p. 70-71) Users who post their expression online are probably controllers, and the GDPR disclosure requirement probably applies to their personal data held by an intermediary.  The intermediary can be compelled, based on an unverified complaint, to unmask anonymous speakers — sharing their personal information without consent.

The disclosure requirement may be a sensible provision for traditional data controllers — say, a lender that shared information with a credit reporting agency.   But it is dangerous for online platforms.  It provides a means for companies and individuals to identify and potentially retaliate against people who say things about them that they do not like. The GDPR specifies no legal process or protection for the rights of those users, but does provide exceptions for cases in which “disclosure is expressly laid down by Union or Member State law to which the controller is subject” (14a.4(c) Council, sic).  Presumably this section is intended to limit disclosure of user data to situations where there is valid legal process and the disclosure complies with the legal protections of the GDPR itself.  But this is far from clear, and badly in need of redrafting to clearly prohibit disclosure absent adequate legal protection for the speaker.

I can only assume that the drafters were not considering this situation, or its tremendous impact on anonymous online speech, given their keen interest in anonymity and pseudonymity in other parts of the GDPR.  Here again, viewing the issue through the lens of intermediaries’ Notice and Takedown process illuminates disturbing consequences for Internet users who seek and impart information on the Internet.  And, again, simply excluding online content providers from this provision of the GDPR would solve an important problem.

  7. The Person Who Put Content Online Is Not Told of Its Erasure

Finally, there is the one person who is not supposed to be told about the removal: the person whose speech is being erased.  The GDPR leaves intact legal provisions that regulators have interpreted to prohibit notice to the content’s publisher under existing law.  In its guidelines for Google’s implementation of the Costeja decision, the Article 29 Working Party said that there is no legal basis for Google to routinely tell webmasters when their content is delisted.

The idea that the person who put content online should not know when it is erased or de-linked makes some sense from a pure data protection perspective. The idea is that the requester is exercising a legal right to make the company stop processing her information.  Talking to a publisher or webmaster about the request is just more unauthorized processing.  More pragmatically, a person whose privacy is violated by online content probably will not want the perpetrator to know of her efforts to remove it.

Viewed through the lens of intermediary liability, due process, or free expression rights, by contrast, this looks pretty outrageous.  It gives all procedural protections to the accuser, and none to the accused.  The resulting harms to individuals and companies are real: a small business can lose access to customers; a speaker can have her opinions silenced; all through a secret process which in most cases provides no notice or opportunity for defense.  There is a reason that intermediary liability model rules like the Manila Principles, and existing laws like the US DMCA, allow or even require companies to let users know when their content is deleted, and give users a chance to challenge removal decisions.  A response or “counternotice” from the content provider serves as an important check on both improper removal requests and intermediary error in processing them.  The risk of error — or laziness, or risk-aversion — by the intermediary is the reason why routine, pre- or post-removal notice to the accused Internet user is so important.  If notice only happens when the intermediary figures out that a removal request is problematic — as Article 29 suggested in its Guidelines — many improper deletions of legal content will go uncorrected.

Notice to the person who put the content online can lead to better decision-making, by bringing someone who knows the context and underlying facts — and who is motivated to defend her own expression — into the conversation.  Importantly, it also opens up possibilities for better, more proportionate, and well-tailored solutions.  While intermediaries have a binary choice – take content down or leave it up – a content creator can do much better: she can reword a bad phrase, update or annotate a news story, take down a picture from a blog post while leaving lawful text intact. She can preserve the good parts of her online speech while removing the bad.  Removal or correction of content at its source can also provide a better outcome for a person whose rights it violated, since the infringing content is no longer out there for people to find on other sites or shared by other means.

A number of courts have looked to content creators to take measures like this in the “Right to Be Forgotten” context. A recent, post-Costeja data protection case from the Constitutional Court of Colombia is an example.  After weighing the opposing fundamental rights at issue, the court ordered an online news source to (1) update its article about the plaintiff, and (2) use simple technical tools to prevent search engines from listing the information in search results.  Jef Ausloos and Aleksandra Kuczerawy report a similar case in Belgium.  The idea of putting this decision and technical control in the hands of the publisher is not new — well before Costeja, the Italian DPA did the same thing with archived news articles about old criminal convictions. (See discussion of cases at notes 121-123 in Tamò and George, Oblivion, Erasure and Forgetting in the Digital Age)
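The “simple technical tools” courts have pointed to are presumably the standard robots-exclusion mechanisms. A publisher can, for example, add a robots “noindex” meta tag to the page, which asks search engines not to list it while leaving the page itself online. A minimal sketch (whether a given crawler honors the tag is up to the crawler):

```python
# Sketch: inserting the standard robots "noindex" meta tag into a
# page's <head>. This asks search engines not to list the page while
# leaving it online; honoring the tag is voluntary on the crawler's part.

NOINDEX_TAG = '<meta name="robots" content="noindex">'

def add_noindex(html: str) -> str:
    """Insert the noindex tag after <head>, unless it is already present."""
    if NOINDEX_TAG in html:
        return html
    return html.replace("<head>", "<head>\n  " + NOINDEX_TAG, 1)

page = "<html><head><title>Archived story</title></head><body>...</body></html>"
print(add_noindex(page))
```

This is precisely the kind of proportionate, publisher-controlled remedy that the all-or-nothing intermediary removal process cannot offer.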

Giving publishers notice and opportunity to defend their online expression would make the GDPR’s removal process more fair; avoid unnecessary harm to free expression and information access online; and introduce better tools to redress privacy harms to the person requesting removal.  Right now, the GDPR is putting decisions about the publisher’s content in the hands of her accuser and a technology company, instead — and giving both of those parties incentives to disregard her rights.

The GDPR creates a process that fails to protect Internet users’ rights to free expression and access to information.  Simple text changes could eliminate many of these shortcomings, while still providing relief for people harmed by content online.  Lawmakers can and should make those changes while there is still time.

Daphne Keller is Director of Intermediary Liability at The Center for Internet and Society at Stanford Law School

Disclosure: Daphne Keller previously worked on “Right to Be Forgotten” issues as Associate General Counsel at Google. 

Request: Comments and feedback on this analysis are very welcome, here or on Twitter @daphnehk

A version of this post was originally published on the Stanford Center for Internet and Society blog and the Internet Policy Review News & Comments.

