UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate for a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However, digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies on uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements planned for terrorist and child sexual exploitation content, whereas companies will not be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that tackling online harms remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice, with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy, to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self-harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user-generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

British parliament presses Facebook on letting politicians lie in ads

In yet another letter seeking to pry accountability from Facebook, the chair of a British parliamentary committee has pressed the company over its decision to adopt a policy on political ads that supports flagrant lying.

In the letter Damian Collins, chair of the DCMS committee, asks the company to explain why it recently took the decision to change its policy regarding political ads — “given the heavy constraint this will place on Facebook’s ability to combat online disinformation in the run-up to elections around the world”.

“The change in policy will absolve Facebook from the responsibility of identifying and tackling the widespread content of bad actors, such as Russia’s Internet Research Agency,” he warns, before going on to cite a recent tweet by the former chief of Facebook’s global efforts around political ads transparency and election integrity, who has claimed that senior management ignored calls from lower down for ads to be scanned for misinformation.

“I also note that Facebook’s former head of global elections integrity ops, Yael Eisenstat, has described that when she advocated for the scanning of adverts to detect misinformation efforts, despite engineers’ enthusiasm she faced opposition from upper management,” writes Collins.

In a further question, Collins asks what specific proposals Eisenstat’s team made; to what extent Facebook determined them to be feasible; and on what grounds they were not progressed.

He also asks what plans Facebook has to formalize a working relationship with fact-checkers over the long run.

A Facebook spokesperson declined to comment on the DCMS letter, saying the company would respond in due course.

In a naked display of its platform’s power and political muscle, Facebook deployed a former politician to endorse its ‘fake ads are fine’ position last month — when head of global policy and communication, Nick Clegg, who used to be the deputy prime minister of the UK, said: “We do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.”

So, in other words, if you’re a politician you get a green light to run lying ads on Facebook.

Clegg was giving a speech on the company’s plans to prevent interference in the 2020 US presidential election. The only line he said Facebook would be willing to draw was if a politician’s speech “can lead to real world violence and harm”. But from a company that abjectly failed to prevent its platform from being misappropriated to accelerate genocide in Myanmar that’s the opposite of reassuring.

“At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves,” said Clegg. “We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.”

In truth Facebook roundly fails to protect its platform from outside interference too. Inauthentic behavior and fake content is a ceaseless firefight that Facebook is nowhere close to being on top of, let alone winning. But on political ads it’s not even going to try — giving politicians around the world carte blanche to use outrage-fuelling disinformation and racist dogwhistles as a low budget, broad reach campaign strategy.

We’ve seen this before on Facebook of course, during the UK’s Brexit referendum — when scores of dark ads sought to whip up anti-immigrant sentiment and drive a wedge between voters and the European Union.

And indeed Collins’ crusade against Facebook as a conduit for disinformation began in the wake of that 2016 EU referendum.

Since then the company has faced major political scrutiny over how it accelerates disinformation — and has responded by creating a degree of transparency on political ads, launching an archive where this type of advert can be searched. But that appears as far as Facebook is willing to go on tackling the malicious propaganda problem its platform accelerates.

In the US, senator Elizabeth Warren has been duking it out publicly with Facebook on the same point, rather more directly than Collins — by running ads on Facebook saying it’s endorsing Trump by supporting his lies.

There’s no sign of Facebook backing down, though. On the contrary. A recent leak from an internal meeting saw founder Mark Zuckerberg attacking Warren as an “existential” threat to the company. While, this week, Bloomberg reports that Facebook executives have been quietly advising a Warren rival for the Democratic nomination, Pete Buttigieg, on campaign hires.

So a company that hires politicians to senior roles, advises high profile politicians on election campaigns, tweaks its policy on political ads after a closed door meeting with the current holder of the office of US president, Donald Trump, and ignores internal calls to robustly police political ads, is rapidly sloughing off any residual claims to be ‘just a technology company’. (Though, really, we knew that already.)

In the letter Collins also presses Facebook on its plan to rollout end-to-end encryption across its messaging app suite, asking why it can’t limit the tech to WhatsApp only — something the UK government has also been pressing it on this month.

He also raises questions about Facebook’s access to metadata — asking whether it will use inferences gleaned from the who, when and where of e2e encrypted comms (even though it can’t access the what) to target users with ads.

Facebook’s self-proclaimed ‘pivot to privacy’ — when it announced earlier this year a plan to unify its separate messaging platforms onto a single e2e encrypted backend — has been widely interpreted as an attempt to make it harder for antitrust regulators to break up its business empire, as well as a strategy to shirk responsibility for content moderation by shielding itself from much of the substance that flows across its platform while retaining access to richer cross-platform metadata so it can continue to target users with ads…

Loot boxes in games are gambling and should be banned for kids, say UK MPs

UK MPs have called for the government to regulate the games industry’s use of loot boxes under current gambling legislation — urging a blanket ban on the sale of loot boxes to players who are children.

Kids should instead be able to earn in-game credits to unlock loot boxes, MPs have suggested in a recommendation that won’t be music to the games industry’s ears.

Loot boxes refer to virtual items in games that can be bought with real-world money and do not reveal their contents in advance. The MPs argue the mechanic should be considered a game of chance played for money’s worth and regulated under the UK Gambling Act.
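The mechanic the MPs describe can be sketched as a simple random draw. In the toy Python sketch below, the drop rates, item names and £2 box price are all invented for illustration and do not reflect any real game’s odds; the point is only that the buyer pays a fixed price for an unknown outcome.

```python
import random

# Toy illustration of the loot box mechanic. Drop rates, item names and
# the £2 price are invented for this sketch, not taken from any real game.
DROP_TABLE = [("common", 0.80), ("rare", 0.18), ("ultra-rare", 0.02)]
BOX_PRICE_GBP = 2.0

def open_box(rng: random.Random) -> str:
    """Draw one item at random; the buyer cannot know the outcome in advance."""
    roll = rng.random()
    cumulative = 0.0
    for item, rate in DROP_TABLE:
        cumulative += rate
        if roll < cumulative:
            return item
    return DROP_TABLE[-1][0]  # guard against floating-point round-off

def expected_cost(target_item: str) -> float:
    """Average spend to obtain one copy of target_item: price / drop rate."""
    return BOX_PRICE_GBP / dict(DROP_TABLE)[target_item]
```

Under these made-up odds, `expected_cost("ultra-rare")` works out to £100 of average spend per ultra-rare item — the expected-value arithmetic that underpins the committee’s ‘game of chance played for money’s worth’ framing.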

The Department for Digital, Culture, Media and Sport’s (DCMS) parliamentary committee makes the recommendations in a report published today following an enquiry into immersive and addictive technologies that saw it take evidence from a number of tech companies including Fortnite maker Epic Games; Facebook-owned Instagram; and Snapchat.

The committee said it found representatives from the games industry to be “wilfully obtuse” in answering questions about typical patterns of play — data the report emphasizes is necessary for proper understanding of how players are engaging with games — as well as calling out some games and social media company representatives for demonstrating “a lack of honesty and transparency”, leading it to question what the companies have to hide.

“The potential harms outlined in this report can be considered the direct result of the way in which the ‘attention economy’ is driven by the objective of maximising user engagement,” the committee writes in a summary of the report which it says explores “how data-rich immersive technologies are driven by business models that combine people’s data with design practices to have powerful psychological effects”.

As well as trying to pry information out of games companies, MPs also took evidence from gamers during the course of the enquiry.

In one instance the committee heard that a gamer spent up to £1,000 per year on loot box mechanics in Electronic Arts’ FIFA series.

A member of the public also reported that their adult son had built up debts of more than £50,000 through spending on microtransactions in online game RuneScape. The maker of that game, Jagex, told the committee that players “can potentially spend up to £1,000 a week or £5,000 a month”.

In addition to calling for gambling law to be applied to the industry’s lucrative loot box mechanic, the report calls on games makers to face up to responsibilities to protect players from potential harms, saying research into possible negative psychosocial harms has been hampered by the industry’s unwillingness to share play data.

“Data on how long people play games for is essential to understand what normal and healthy — and, conversely, abnormal and potentially unhealthy — engagement with gaming looks like. Games companies collect this information for their own marketing and design purposes; however, in evidence to us, representatives from the games industry were wilfully obtuse in answering our questions about typical patterns of play,” it writes.

“Although the vast majority of people who play games find it a positive experience, the minority who struggle to maintain control over how much they are playing experience serious consequences for them and their loved ones. At present, the games industry has not sufficiently accepted responsibility for either understanding or preventing this harm. Moreover, both policy-making and potential industry interventions are being hindered by a lack of robust evidence, which in part stems from companies’ unwillingness to share data about patterns of play.”

The report recommends the government require games makers share aggregated player data with researchers, with the committee calling for a new regulator to oversee a levy on the industry to fund independent academic research — including into ‘gaming disorder’, an addictive condition formally designated by the World Health Organization — and to ensure that “the relevant data is made available from the industry to enable it to be effective”.

“Social media platforms and online games makers are locked in a relentless battle to capture ever more of people’s attention, time and money. Their business models are built on this, but it’s time for them to be more responsible in dealing with the harms these technologies can cause for some users,” said DCMS committee chair, Damian Collins, in a statement.

“Loot boxes are particularly lucrative for games companies but come at a high cost, particularly for problem gamblers, while exposing children to potential harm. Buying a loot box is playing a game of chance and it is high time the gambling laws caught up. We challenge the Government to explain why loot boxes should be exempt from the Gambling Act.

“Gaming contributes to a global industry that generates billions in revenue. It is unacceptable that some companies with millions of users and children among them should be so ill-equipped to talk to us about the potential harm of their products. Gaming disorder based on excessive and addictive game play has been recognised by the World Health Organisation. It’s time for games companies to use the huge quantities of data they gather about their players, to do more to proactively identify vulnerable gamers.”

The committee wants independent research to inform the development of a behavioural design code of practice for online services. “This should be developed within an adequate timeframe to inform the future online harms regulator’s work around ‘designed addiction’ and ‘excessive screen time’,” it writes, citing the government’s plan for a new Internet regulator for online harms.

MPs are also concerned about the lack of robust age verification to keep children off age-restricted platforms and games.

The report identifies inconsistencies in the games industry’s ‘age-ratings’ stemming from self-regulation around the distribution of games (such as online games not being subject to a legally enforceable age-rating system, meaning voluntary ratings are used instead).

“Games companies should not assume that the responsibility to enforce age-ratings applies exclusively to the main delivery platforms: All companies and platforms that are making games available online should uphold the highest standards of enforcing age-ratings,” the committee writes on that.

“Both games companies and the social media platforms need to establish effective age verification tools. They currently do not exist on any of the major platforms which rely on self-certification from children and adults,” Collins adds.

During the enquiry it emerged that the UK government is working with tech companies including Snap to try to devise a centralized system for age verification for online platforms.

A section of the report on Effective Age Verification cites testimony from deputy information commissioner Steve Wood raising concerns about any move towards “wide-spread age verification [by] collecting hard identifiers from people, like scans of passports”.

Wood instead pointed the committee towards technological alternatives, such as age estimation, which he said uses “algorithms running behind the scenes using different types of data linked to the self-declaration of the age to work out whether this person is the age they say they are when they are on the platform”.

Snapchat’s Will Scougal also told the committee that its platform is able to monitor user signals to ensure users are the appropriate age — by tracking behavior and activity; location; and connections between users to flag a user as potentially underage. 
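The kind of signal-based age check Wood and Scougal describe can be illustrated with a toy rule. Every signal name and threshold in the sketch below is invented for illustration; production systems would use trained models over far richer behavioural data than this.

```python
from dataclasses import dataclass

# Illustrative sketch of a behavioural age check. All signal names and
# thresholds are invented; this is not any platform's actual system.
@dataclass
class UserSignals:
    declared_age: int
    median_friend_age: float      # typical age of the user's connections
    school_hours_activity: float  # share of activity during school hours (0-1)

def flag_possibly_underage(signals: UserSignals) -> bool:
    """Flag self-declared adults whose behaviour suggests a younger user."""
    if signals.declared_age < 18:
        return False  # account is already treated as a minor
    # A declared adult whose connections skew very young and who is mostly
    # active during school hours gets flagged for human review.
    return signals.median_friend_age < 16 and signals.school_hours_activity > 0.6
```

The design point such systems rely on is the one Wood raised: estimation from signals already linked to the self-declared age, rather than collecting hard identifiers like passport scans.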

The report also makes a recommendation on deepfake content, with the committee saying that malicious creation and distribution of deepfake videos should be regarded as harmful content.

“The release of content like this could try to influence the outcome of elections and undermine people’s public reputation,” it warns. “Social media platforms should have clear policies in place for the removal of deepfakes. In the UK, the Government should include action against deepfakes as part of the duty of care social media companies should exercise in the interests of their users, as set out in the Online Harms White Paper.”

“Social media firms need to take action against known deepfake films, particularly when they have been designed to distort the appearance of people in an attempt to maliciously damage their public reputation, as was seen with the recent film of the Speaker of the US House of Representatives, Nancy Pelosi,” adds Collins.