Automated censorship is not the answer to extremism: unbalanced Home Affairs Committee recommendations threaten free expression – Jim Killock

3 05 2017

The report by the Home Affairs Select Committee, published on 1 May 2017, brands social media companies as behaving irresponsibly in failing to remove extremist material.

It takes the view that the job of removing illegal extremist videos and postings is entirely the responsibility of the companies, and does not envisage any role for the courts in adjudicating what is in fact legal.

This is a complex issue: companies have to take responsibility for content on their platforms from many perspectives, including public expectation, and there are legitimate concerns.

The approach the committee advocates is, however, extremely unbalanced and could provoke a regime of automated censorship that would impact legal content, including material opposing extremism.

We deal below with two of the recommendations in the report to give some indication of how problematic it is.

  • Government should consult on a stronger law and a system of fines for companies that fail to remove illegal content

Platforms receive reports from people about content; the committee assumes this content can be regarded as illegal. Sometimes it may be obvious. However, not every video or graphic will be “obviously” illegal. Who then decides that there is a duty to remove material? Is it the complainant, the platform, or the original publisher? Or an independent third party such as a court?

The comparison with copyright is enlightening here. Copyright owners must identify material and assert their rights: even when automatic content matching is used, a human must assert the owner’s rights to take down a YouTube video. Of course, the video’s author can object. Meanwhile, this system is prone to all kinds of errors.

However, for all its faults, there is a clear line of accountability. The copyright owner is responsible for asserting a breach of copyright; the author is responsible for defending their right to publish; and both accept that a court must decide in the event of a dispute.

With child abuse material, there is a similar expectation that material is reviewed by the IWF, which makes a decision about its legality or otherwise. It is not up to the public to report directly to companies.

None of this need for accountability and process is reflected in the HASC report, which merely asserts that reports of terrorist content by non-interested persons should create a liability on the platform.

Ultimately, fines for failure to remove content as suggested by the committee could only be reasonable if the reports had been made through a robust process and it was clear that the material was in fact in breach of the law.

  • Social media companies that fail to proactively search for and remove illegal material should pay towards the costs of the police doing so instead

There is always a case for funding the police from general taxation. However, hypothecated resources in cases like this are liable to generate more and more calls for specific “Internet taxes” to deal with problems that can be blamed on companies, even when the companies have little to do with the activity in reality.

We should ask: is the posting of terrorist content a problem generated by the platforms, or by other, wider social problems? It is not entirely obvious that this problem has in some way been produced by social media companies. It is clear that extremists use these platforms, just as they use transport, mail and phones. It appears to be the visibility of extremists’ activities that is attracting attention and blame to the platforms, rather than an objective link between the aims of Twitter and Facebook and terrorists.

We might also ask: despite the apparent volume of content that is posted and reposted, how much attention does it really get? This is important to know if we are trying to assess how to deal with the problem.

Proactive searching by companies is something the Select Committee ought to be cautious about. It is inevitably error-prone, and it can only lead one way: to over-zealous matching, driven by the fear of failing to remove content. In the case of extremist content, it is perfectly reasonable to assume that content opposing extremism while quoting or reusing propagandist content would be identified and removed.
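To make the point concrete, here is a minimal sketch of why automated matching is context-blind. This is not the committee’s proposal or any platform’s actual system; the hash list, the example posts and the flag_matches helper are all hypothetical, and real systems use more sophisticated (perceptual) fingerprinting. The underlying limitation is the same: the matcher sees only the media, never the intent of the post around it.

```python
import hashlib

# Hypothetical fingerprints of known propaganda clips (purely illustrative).
KNOWN_EXTREMIST_HASHES = {
    hashlib.sha256(b"propaganda-clip-bytes").hexdigest(),
}

def flag_matches(posts):
    """Flag every post whose attached media matches a known fingerprint.

    The matcher only looks at the bytes of the attachment, so a news
    report or counter-extremism video that quotes the clip is flagged
    exactly like the original upload.
    """
    flagged = []
    for post in posts:
        media_hash = hashlib.sha256(post["media"]).hexdigest()
        if media_hash in KNOWN_EXTREMIST_HASHES:
            flagged.append(post["author"])
    return flagged

posts = [
    {"author": "recruiter", "text": "Join us", "media": b"propaganda-clip-bytes"},
    {"author": "journalist", "text": "Why this propaganda is wrong", "media": b"propaganda-clip-bytes"},
]

# Both posts are flagged: the filter cannot tell promotion from rebuttal.
print(flag_matches(posts))  # ['recruiter', 'journalist']
```

Under a regime of fines for failure to remove, the rational response is to act on every match of this kind rather than risk leaving something up, which is precisely the over-zealous removal described above.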

The incentives that the Committee proposes would lead to censorship of legal material by machines. The Select Committee report fails to mention or examine this, assuming instead that technology will provide the answers.

This post originally appeared on the Open Rights Group blog and is reproduced with permission and thanks.
