UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to place a legal duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However, digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies to uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements planned for terrorist and child sexual exploitation content, whereas companies will not be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that online harms remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy, to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide, after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self-harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an inquiry into the use of social media for digital campaigning. However, the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.

The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.

A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.

GDPR’s Article 22 includes the right for individuals not to be subject to solely automated decision-making where it produces legal or similarly significant effects on them. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.

In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.

Specifically, the court found that the SyRI legislation fails a balancing test in Article 8 of the ECHR, which requires that any social interest be weighed against the violation of individuals’ private lives — with a fair and reasonable balance being required.

In its current form the automated risk assessment system failed this test, in the court’s view.

Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.

In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. Meanwhile, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.

The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.

Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

So the decision by the Dutch court could have some near-term implications for UK policy in this area.

The judgement does not shut the door on the use by states of automated profiling systems entirely — but does make it clear that in Europe human rights law must be central to the design and implementation of rights-risking tools.

It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

UK Council websites are letting citizens be profiled for ads, study shows

On the same day that a data ethics advisor to the UK government urged action to regulate online targeting, a study conducted by pro-privacy browser Brave has highlighted how Brits are being profiled by the behavioral ad industry when they visit their local Council’s website — perhaps seeking info on local services or guidance about benefits, including potentially sensitive pages related to addiction services or disabilities.

Brave found that nearly all UK Councils permit at least one company to learn about the behavior of people visiting their sites, finding that a full 409 Councils exposed some visitor data to private companies.

Many large councils (serving 300,000+ people) were found exposing site visitors to what Brave describes as “extensive tracking and data collection by private companies” — with the worst offenders, London’s Enfield Council and Sheffield City Council, exposing visitors to 25 data collectors apiece.

Brave argues the findings represent a conservative illustration of how much commercial tracking and profiling of visitors is going on on public sector websites — a floor, rather than a ceiling — given it was only studying landing pages of Council sites without any user interaction, and could only pick up known trackers (nor could the study look at how data is passed between tracking and data brokering companies).

Nor is it the first such study to warn that public sector websites are infested with for-profit adtech. A report last year by Cookiebot found users of public sector and government websites in the EU being tracked when they performed health-related searches — including queries related to HIV, mental health, pregnancy, alcoholism and cancer.

Brave’s study — which was carried out using the webxray tool — found that almost all (98%) of the Councils used Google systems, with the report noting that the tech giant owns all five of the top embedded elements loaded by Council websites, which it suggests gives the company a god-like view of how UK citizens are interacting with their local authorities online.
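For a rough sense of how this kind of scan works, here is a minimal Python sketch: fetch a council landing page, list the third-party hosts it embeds, and flag any that match a known-tracker list. This is not the webxray tool itself, and the council URL and tracker list are illustrative placeholders rather than data from Brave's study.

```python
# Minimal sketch of a landing-page tracker scan (not webxray itself).
# The council URL and tracker list below are hypothetical placeholders.
from urllib.parse import urlparse
from urllib.request import urlopen
import re

KNOWN_TRACKERS = {            # tiny illustrative subset of a real tracker list
    "doubleclick.net",
    "google-analytics.com",
    "googletagmanager.com",
    "facebook.net",
}

def third_party_hosts(page_url: str) -> set[str]:
    """Return hosts referenced in src/href attributes that differ from the
    page's own domain (a rough proxy for embedded third-party elements)."""
    html = urlopen(page_url).read().decode("utf-8", errors="replace")
    own_host = urlparse(page_url).hostname or ""
    hosts = set()
    for src in re.findall(r'(?:src|href)=["\'](https?://[^"\']+)', html):
        host = urlparse(src).hostname or ""
        if host and not host.endswith(own_host):
            hosts.add(host)
    return hosts

def flag_trackers(hosts: set[str]) -> set[str]:
    """Match observed third-party hosts against the known-tracker list."""
    return {h for h in hosts if any(h.endswith(t) for t in KNOWN_TRACKERS)}

if __name__ == "__main__":
    url = "https://www.example-council.gov.uk/"   # hypothetical council site
    hosts = third_party_hosts(url)
    print(f"{len(hosts)} third-party hosts; known trackers: {flag_trackers(hosts)}")
```

A real crawl, like Brave's, would also follow the network requests those embedded elements trigger, which is where most of the data collection happens.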

The analysis also found 198 of the Council websites use the real-time bidding (RTB) form of programmatic online advertising. This is notable because RTB is the subject of a number of data protection complaints across the European Union — including in the UK, where the Information Commissioner’s Office (ICO) itself has been warning the adtech industry for more than half a year that its current processes are in breach of data protection laws.
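To make concrete why RTB raises data protection concerns, the sketch below shows the shape of a simplified OpenRTB-style bid request, the kind of record sent to bidders each time an ad slot loads. The field names follow the public OpenRTB spec, but every value is invented for illustration and is not drawn from any council site or from the study.

```python
# Simplified, illustrative OpenRTB-style bid request (field names follow the
# public OpenRTB 2.x spec; all values are invented). The page URL alone can
# reveal that the visitor was reading about benefits or disability services,
# and it travels alongside user and device identifiers.
bid_request = {
    "id": "auction-1234",
    "imp": [{"id": "1", "banner": {"w": 300, "h": 250}}],
    "site": {
        "domain": "example-council.gov.uk",          # hypothetical council
        "page": "https://example-council.gov.uk/benefits/disability-support",
    },
    "device": {
        "ua": "Mozilla/5.0 ...",                     # browser details
        "ip": "203.0.113.7",                         # coarse location signal
        "geo": {"country": "GBR"},
    },
    "user": {"id": "a1b2c3d4"},                      # pseudonymous but persistent ID
}
# In RTB this record is sent to every bidder in the chain, whether or not they
# win the auction — the broadcast that privacy complaints object to.
```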

However the UK watchdog has preferred to bark softly in the industry’s general direction over its RTB problem, instead of taking any enforcement action — a response that’s been dubbed “disastrous” by privacy campaigners.

One of the smaller RTB players the report highlights — which calls itself the Council Advertising Network (CAN) — was found sharing people’s data from 34 Council websites with 22 companies, which could then be insecurely broadcasting it on to hundreds or more entities in the bid chain.

Slides from a CAN media pack refer to “budget conscious” direct marketing opportunities via the ability to target visitors to Council websites accessing pages about benefits, child care and free local activities; “disability” marketing opportunities via the ability to target visitors to Council websites accessing pages such as home care, blue badges and community and social services; and “key life stages” marketing opportunities via the ability to target visitors to Council websites accessing pages related to moving home, having a baby, getting married or losing a loved one.

Brave’s report — while a clearly stated promotion for its own anti-tracking browser (given it’s a commercial player too) — should be seen in the context of the ICO’s ongoing failure to take enforcement action against RTB abuses. It’s therefore an attempt to increase pressure on the regulator to act by further illuminating a complex industry which has used a lack of transparency to shield massive rights abuses and continues to benefit from a lack of enforcement of Europe’s General Data Protection Regulation.

And a low level of public understanding of how all the pieces in the adtech chain fit together and sum to a dysfunctional whole, where public services are turned against the citizens whose taxes fund them to track and target people for exploitative ads, likely contributes to discouraging sharper regulatory action.

But, as the saying goes, sunlight disinfects.

Asked what steps he would like the regulator to take, Brave’s chief policy officer, Dr Johnny Ryan, told TechCrunch: “I want the ICO to use its powers of enforcement to end the UK’s largest data breach. That data breach continues, and two years to the day after I first blew the whistle about RTB, Simon McDougall wrote a blog post accepting Google and the IAB’s empty gestures as acts of substance. It is time for the ICO to move this over to its enforcement team, and stop wasting time.”

We’ve reached out to the ICO for a response to the report’s findings.