UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate to place a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However, digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will place a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies to uploads in order to comply with very broadly defined concepts of harm — dubbing the plan state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal vs content that has “potential to cause harm”, with the heaviest content removal requirements reserved for terrorist and child sexual exploitation content, whereas companies will not be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that online harms legislation remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meanwhile, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice, with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide, after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

Facebook data misuse and voter manipulation back in the frame with latest Cambridge Analytica leaks

More details are emerging about the scale and scope of disgraced data company Cambridge Analytica’s activities in elections around the world — via a cache of internal documents that’s being released by former employee and self-styled whistleblower, Brittany Kaiser.

The now shut down data modelling company, which infamously used stolen Facebook data to target voters for President Donald Trump’s campaign in the 2016 U.S. election, was at the center of the data misuse scandal that, in 2018, wiped billions off Facebook’s share price and contributed to a $5BN FTC fine for the tech giant last summer.

However plenty of questions remain, including where, for whom and exactly how Cambridge Analytica and its parent entity SCL Elections operated; as well as how much Facebook’s leadership knew about the dealings of the firm that was using its platform to extract data and target political ads — helped by some of Facebook’s own staff.

Certain Facebook employees were referring to Cambridge Analytica as a “sketchy” company as far back as September 2015 — yet the tech giant only pulled the plug on platform access after the scandal went global in 2018.

Facebook CEO Mark Zuckerberg has also continued to maintain that he only personally learned about CA from a December 2015 Guardian article, which broke the story that Ted Cruz’s presidential campaign was using psychological data based on research covering tens of millions of Facebook users, harvested largely without permission. (It wasn’t until March 2018 that further investigative journalism blew the lid off the story — turning it into a global scandal.)

Former Cambridge Analytica business development director Kaiser, who had a central role in last year’s Netflix documentary about the data misuse scandal (The Great Hack), began her latest data dump late last week — publishing links to scores of previously unreleased internal documents via a Twitter account called @HindsightFiles. (At the time of writing Twitter has placed a temporary limit on viewing the account — citing “unusual activity”, presumably as a result of the volume of downloads it’s attracting.)

Since becoming part of the public CA story Kaiser has been campaigning for Facebook to grant users property rights over their data. She claims she’s releasing new documents from her former employer now because she’s concerned this year’s US election remains at risk of the same type of big-data-enabled voter manipulation that tainted the 2016 result.

“I’m very fearful about what is going to happen in the US election later this year, and I think one of the few ways of protecting ourselves is to get as much information out there as possible,” she told The Guardian.

“Democracies around the world are being auctioned to the highest bidder,” is the tagline on the Twitter account Kaiser is using to distribute the previously unpublished documents — more than 100,000 of which are set to be released over the coming months, per the newspaper’s report.

The releases are being grouped into countries — with documents to-date covering Brazil, Kenya and Malaysia. There is also a themed release dealing with issues pertaining to Iran, and another covering CA/SCL’s work for Republican John Bolton’s Political Action Committee in the U.S.

The releases look set to underscore the global scale of CA/SCL’s social media-fuelled operations, with Kaiser writing that the previously unreleased emails, project plans, case studies and negotiations span at least 65 countries.

A spreadsheet of associate officers included in the current cache lists SCL associates in a large number of countries and regions including Australia, Argentina, the Balkans, India, Jordan, Lithuania, the Philippines, Switzerland and Turkey, among others. A second tab listing “potential” associates covers political and commercial contacts in various other places including Ukraine and even China.

A UK parliamentary committee which investigated online political campaigning and voter manipulation in 2018 — taking evidence from Kaiser and CA whistleblower Chris Wylie, among others — urged the government to audit the PR and strategic communications industry, warning in its final report how “easy it is for discredited companies to reinvent themselves and potentially use the same data and the same tactics to undermine governments, including in the UK”.

“Data analytics firms have played a key role in elections around the world. Strategic communications companies frequently run campaigns internationally, which are financed by less than transparent means and employ legally dubious methods,” the DCMS committee also concluded.

The committee’s final report highlighted election and referendum campaigns in around thirty countries that SCL Elections (and its myriad “associated companies”) had been involved in. But per Kaiser’s telling its activities — and/or ambitions — appear to have been considerably broader and even global in scope.

Documents released to date include a case study of work that CA was contracted to carry out in the U.S. for Bolton’s Super PAC — where it undertook what is described as “a personality-targeted digital advertising campaign with three interlocking goals: to persuade voters to elect Republican Senate candidates in Arkansas, North Carolina and New Hampshire; to elevate national security as an issue of importance and to increase public awareness of Ambassador Bolton’s Super PAC”.

Here CA writes that it segmented “persuadable and low-turnout voter populations to identify several key groups that could be influenced by Bolton Super PAC messaging”, targeting them with online and Direct TV ads — designed to “appeal directly to specific groups’ personality traits, priority issues and demographics”. 

Psychographic profiling — derived from CA’s modelling of Facebook user data — was used to segment U.S. voters into targetable groups, including for serving microtargeted online ads. The company badged voters with personality-specific labels such as “highly neurotic” — targeting individuals with customized content designed to prey on their fears and/or hopes based on its analysis of voters’ personality traits.

The process of segmenting voters by personality and sentiment was made commercially possible by access to identity-linked personal data — which puts Facebook’s population-scale collation of identities and individual-level personal data squarely in the frame.

It was a cache of tens of millions of Facebook profiles, along with responses to a personality quiz app linked to Facebook accounts, which was sold to Cambridge Analytica in 2014, by a company called GSR, and used to underpin its psychographic profiling of U.S. voters.

In evidence to the DCMS committee last year GSR’s co-founder, Aleksandr Kogan, argued that Facebook did not have a “valid” developer policy at the time, since he said the company did nothing to enforce the stated T&Cs — meaning users’ data was wide open to misappropriation and exploitation.

The UK’s data protection watchdog also took a dim view. In 2018 it issued Facebook with the maximum fine possible, under relevant national law, for the CA data breach — and warned in a report that democracy is under threat. The country’s information commissioner also called for an “ethical pause” of the use of online microtargeting ad tools for political campaigning.

No such pause has taken place.

Meanwhile for its part, since the Cambridge Analytica scandal snowballed into global condemnation of its business, Facebook has made loud claims to be ‘locking down’ its platform — including saying it would conduct an app audit and “investigate all apps that had access to large amounts of information”; “conduct a full audit of any app with suspicious activity”; and “ban any developer from our platform that does not agree to a thorough audit”.

However, close to two years later, there’s still no final report from the company on the upshot of this self ‘audit’.

And while Facebook was slapped with a headline-grabbing FTC fine on home soil, there was in fact no proper investigation; no requirement for it to change its privacy-hostile practices; and blanket immunity for top execs — even for any unknown data violations in the 2012 to 2018 period. So, ummm…

In another highly curious detail, GSR’s other co-founder, a data scientist called Joseph Chancellor, was in fact hired by Facebook in late 2015. The tech giant has never satisfactorily explained how it came to recruit one of the two individuals at the center of a voter manipulation data misuse scandal which continues to wreak hefty reputational damage on Zuckerberg and his platform. But being able to ensure Chancellor was kept away from the press during a period of intense scrutiny looks pretty convenient.

Last fall, the GSR co-founder was reported to have left Facebook — as quietly, and with as little explanation given, as when he arrived on the tech giant’s payroll.

So Kaiser seems quite right to be concerned that the data industrial complex will do anything to keep its secrets — given it’s designed and engineered to sell access to yours. Even as she has her own reasons to want to keep the story in the media spotlight.

Platforms whose profiteering purpose is to track and target people at global scale — which function by leveraging an asymmetrical ‘attention economy’ — have zero incentive to change or have change imposed upon them. Not when the propaganda-as-a-service business remains in such high demand, whether for selling actual things like bars of soap, or for hawking ideas with a far darker purpose.

British parliament presses Facebook on letting politicians lie in ads

In yet another letter seeking to pry accountability from Facebook, the chair of a British parliamentary committee has pressed the company over its decision to adopt a policy on political ads that supports flagrant lying.

In the letter Damian Collins, chair of the DCMS committee, asks the company to explain why it recently took the decision to change its policy regarding political ads — “given the heavy constraint this will place on Facebook’s ability to combat online disinformation in the run-up to elections around the world”.

“The change in policy will absolve Facebook from the responsibility of identifying and tackling the widespread content of bad actors, such as Russia’s Internet Research Agency,” he warns, before going on to cite a recent tweet by the former chief of Facebook’s global efforts around political ads transparency and election integrity, who has claimed that senior management ignored calls from lower down for ads to be scanned for misinformation.

“I also note that Facebook’s former head of global elections integrity ops, Yael Eisenstat, has described that when she advocated for the scanning of adverts to detect misinformation efforts, despite engineers’ enthusiasm she faced opposition from upper management,” writes Collins.

In a further question, Collins asks what specific proposals Eisenstat’s team made; to what extent Facebook determined them to be feasible; and on what grounds they were not progressed.

He also asks what plans Facebook has to formalize a working relationship with fact-checkers over the long run.

A Facebook spokesperson declined to comment on the DCMS letter, saying the company would respond in due course.

In a naked display of its platform’s power and political muscle, Facebook deployed a former politician to endorse its ‘fake ads are fine’ position last month — when head of global policy and communication, Nick Clegg, who used to be the deputy prime minister of the UK, said: “We do not submit speech by politicians to our independent fact-checkers, and we generally allow it on the platform even when it would otherwise breach our normal content rules.”

So, in other words, if you’re a politician you get a green light to run lying ads on Facebook.

Clegg was giving a speech on the company’s plans to prevent interference in the 2020 US presidential election. The only line he said Facebook would be willing to draw was if a politician’s speech “can lead to real world violence and harm”. But from a company that abjectly failed to prevent its platform from being misappropriated to accelerate genocide in Myanmar that’s the opposite of reassuring.

“At Facebook, our role is to make sure there is a level playing field, not to be a political participant ourselves,” said Clegg. “We have a responsibility to protect the platform from outside interference, and to make sure that when people pay us for political ads we make it as transparent as possible. But it is not our role to intervene when politicians speak.”

In truth Facebook roundly fails to protect its platform from outside interference too. Inauthentic behavior and fake content are a ceaseless firefight that Facebook is nowhere close to being on top of, let alone winning. But on political ads it’s not even going to try — giving politicians around the world carte blanche to use outrage-fuelling disinformation and racist dogwhistles as a low-budget, broad-reach campaign strategy.

We’ve seen this before on Facebook of course, during the UK’s Brexit referendum — when scores of dark ads sought to whip up anti-immigrant sentiment and drive a wedge between voters and the European Union.

And indeed Collins’ crusade against Facebook as a conduit for disinformation began in the wake of that 2016 EU referendum.

Since then the company has faced major political scrutiny over how it accelerates disinformation — and has responded by creating a degree of transparency on political ads, launching an archive where this type of advert can be searched. But that appears as far as Facebook is willing to go on tackling the malicious propaganda problem its platform accelerates.

In the US, senator Elizabeth Warren has been duking it out publicly with Facebook on the same point — rather more directly than Collins — by running ads on Facebook saying it’s endorsing Trump by supporting his lies.

There’s no sign of Facebook backing down, though. On the contrary. A recent leak from an internal meeting saw founder Mark Zuckerberg attacking Warren as an “existential” threat to the company. While, this week, Bloomberg reports that Facebook’s chief executive has been quietly advising a Warren rival for the Democratic nomination, Pete Buttigieg, on campaign hires.

So a company that hires politicians to senior roles, advises high profile politicians on election campaigns, tweaks its policy on political ads after a closed door meeting with the current holder of the office of US president, Donald Trump, and ignores internal calls to robustly police political ads, is rapidly sloughing off any residual claims to be ‘just a technology company’. (Though, really, we knew that already.)

In the letter Collins also presses Facebook on its plan to rollout end-to-end encryption across its messaging app suite, asking why it can’t limit the tech to WhatsApp only — something the UK government has also been pressing it on this month.

He also raises questions about Facebook’s access to metadata — asking whether it will use inferences gleaned from the who, when and where of e2e encrypted comms (even though it can’t access the what) to target users with ads.

Facebook’s self-proclaimed ‘pivot to privacy’ — when it announced earlier this year a plan to unify its separate messaging platforms onto a single e2e encrypted backend — has been widely interpreted as an attempt to make it harder for antitrust regulators to break up its business empire, as well as a strategy to shirk responsibility for content moderation by shielding itself from much of the substance that flows across its platform while retaining access to richer cross-platform metadata so it can continue to target users with ads…