YouTube sellers found touting bogus coronavirus vaccines and masks

YouTube has been criticized for continuing to host coronavirus disinformation on its video sharing platform during a global health emergency.

Two US advocacy groups which campaign for online safety undertook an 18-day investigation of the video sharing platform in March — finding what they say were “dozens” of examples of dubious videos, including some touting bogus vaccines the sellers claimed would protect buyers from COVID-19.

They also found videos advertising medical masks of unknown quality for sale.

There have been concerns about shortages of masks for front-line medical staff, as well as the risk of online scammers hawking low-grade kit that does not offer the claimed protection against the virus.

Google said last month that it would temporarily take down ads for masks from its ad network but sellers looking to exploit the coronavirus crisis appear to be circumventing the ban by using YouTube’s video sharing platform as an alternative digital shop window to lure buyers.

Researchers working for the Digital Citizens Alliance (DCA) and the Coalition for a Safer Web (CSW) initiated conversations with sellers they found touting dodgy coronavirus wares on YouTube — and were offered useless ‘vaccines’ for purchase and hundreds of masks of unknown quality.

“There was ample reason to believe the offers for masks were dubious as well [as the vaccines], as highlighted by interactions with representatives from some of the sellers,” they said.

Their report includes screengrabs of some of the interactions with the sellers. In one, a seller tells the researchers they don’t accept credit cards — but they do accept CashApp, PayPal, Google or Amazon gift cards or Bitcoin.

The same seller offered the researchers vaccines priced at $135 each, and suggested they purchase MMR/Varicella when asked which one is “the best”. Such a vaccine, even if it functioned for MMR/Varicella, would obviously offer no protection against COVID-19.

Another seller was found to be hawking “COVID-19 drugs” under the YouTube account name “Real ID Card Fake Passport Producer”.

“How does a guy calling himself ‘Real ID Card Fake Passport Producer’ even get a page on YouTube?” said Eric Feinberg, lead researcher for CSW, in a statement accompanying the report. “It’s all too easy to get ahold of these guys. We called some of them. Once you contact them, they are relentless. They’ll call you back at all hours and hound you until you buy something. They’ll call you in the middle of the night. They are predators looking to capitalize on our fear.”

A spokesman for the DCA told us the researchers compiled the report based on content from around 60 videos they identified hawking coronavirus-related ‘cures’ or kit between March 6-24.

“There are too many to count. Every day, I find more,” added Feinberg.

The groups are also critical of how YouTube risks lending credibility to coronavirus disinformation because the platform now displays official CDC-branded banners under any COVID-19-related material — including the dubious videos their report highlights.

“YouTube also mixes trusted resources with sites that shouldn’t be trusted and that could confuse consumers — especially when they are scared and desperate,” said DCA executive director, Tom Galvin, in a statement. “It’s hard enough to tell who’s legitimate and who’s not on YouTube.”

The DCA and CSW have written letters to the US Department of Justice and the Federal Trade Commission laying out their findings and calling for “swift action” to hold bad actors accountable.

“YouTube, and its parent company Google, are shirking their formal policy that prohibits content that capitalizes off sensitive events,” they write in a letter to attorney general William Barr.

“Digital Citizens is sharing this information in the hopes your Justice Department will act swiftly to hold bad actors, who take advantage of the coronavirus, accountable. In this crisis, strong action will deter others from engaging in criminal or illicit acts that harm consumers or add to confusion and anxiety,” they add.

Responding to the groups’ findings, a YouTube spokesperson said some of the videos the researchers had identified had not received many views.

After we contacted it about the content, YouTube also said it had removed three channels identified by the researchers in the report for violating its Community Guidelines.

In a statement YouTube added:

Our thoughts are with everyone affected by the coronavirus around the world. We’re committed to providing helpful information at this critical time, including raising authoritative content, reducing the spread of harmful misinformation and showing information panels, using WHO / CDC data, to help combat misinformation. To date, there have been over 5B impressions on our information panels for coronavirus related videos and searches. We also have clear policies against COVID-19 misinformation and we quickly remove videos violating these policies when flagged to us.

The DCA and CSW also recently undertook a similar review of Facebook’s platform — finding sellers touting masks for sale despite the tech giant’s claimed ban on such content. “Facebook promised CNN when they did a story on our report about them that the masks would be gone a week ago, but the researchers from CSW are still finding the masks now,” their spokesman told us.

Earlier this week the Tech Transparency Project also reported still being able to find masks for sale on Facebook’s platform. It found examples of masks showing up in Google’s targeted ads too.

Facebook pushes EU for dilute and fuzzy Internet content rules

Facebook founder Mark Zuckerberg is in Europe this week — attending a security conference in Germany over the weekend where he spoke about the kind of regulation he’d like applied to his platform ahead of a slate of planned meetings with digital heavyweights at the European Commission.

“I do think that there should be regulation on harmful content,” said Zuckerberg during a Q&A session at the Munich Security Conference, per Reuters, making a pitch for bespoke regulation.

He went on to suggest “there’s a question about which framework you use”, telling delegates: “Right now there are two frameworks that I think people have for existing industries — there’s like newspapers and existing media, and then there’s the telco-type model, which is ‘the data just flows through you’, but you’re not going to hold a telco responsible if someone says something harmful on a phone line.”

“I actually think where we should be is somewhere in between,” he added, making his plea for Internet platforms to be a special case.

At the conference he also said Facebook now employs 35,000 people to review content on its platform and implement security measures — including suspending around 1 million fake accounts per day, a stat he professed himself “proud” of.

The Facebook chief is due to meet with key commissioners covering the digital sphere this week, including competition chief and digital EVP Margrethe Vestager, internal market commissioner Thierry Breton and Věra Jourová, who is leading policymaking around online disinformation.

The timing of his trip is clearly linked to digital policymaking in Brussels — with the Commission due to set out its thinking around the regulation of artificial intelligence this week. (A leaked draft last month suggested policymakers are eyeing risk-based rules to wrap around AI.)

More widely, the Commission is wrestling with how to respond to a range of problematic online content — from terrorism to disinformation and election interference — which also puts Facebook’s 2BN+ social media empire squarely in regulators’ sights.

Another policymaking plan — a forthcoming Digital Services Act (DSA) — is slated to upgrade liability rules around Internet platforms.

The detail of the DSA has yet to be publicly laid out but any move to rethink platform liabilities could present a disruptive risk for a content distributing giant such as Facebook.

Going into meetings with key commissioners, Zuckerberg made his preference for being considered a ‘special’ case clear — saying he wants his platform to be regulated neither like the media businesses his empire has financially disrupted, nor like a dumb-pipe telco.

On the latter it’s clear — even to Facebook — that the days of Zuckerberg being able to trot out his erstwhile mantra that ‘we’re just a technology platform’, and wash his hands of tricky content stuff, are long gone.

Russia’s 2016 foray into digital campaigning in the US elections and sundry content horrors/scandals before and since have put paid to that — from nation-state backed fake news campaigns to livestreamed suicides and mass murder.

Facebook has been forced to increase its investment in content moderation. Meanwhile it announced a News section launch last year — saying it would hand-pick publishers’ content to show in a dedicated tab.

The ‘we’re just a platform’ line hasn’t been working for years. And EU policymakers are preparing to do something about that.

With regulation looming Facebook is now directing its lobbying energies onto trying to shape a policymaking debate — calling for what it dubs “the ‘right’ regulation”.

Here the Facebook chief looks to be applying a similar playbook to Google’s CEO, Sundar Pichai — who recently made his own trip to Brussels to push for AI rules so dilute they’d act as a tech enabler.

In a blog post published today Facebook pulls its latest policy lever: Putting out a white paper which poses a series of questions intended to frame the debate at a key moment of public discussion around digital policymaking.

Top of this list is a push to foreground free speech, with Facebook asking “how can content regulation best achieve the goal of reducing harmful speech while preserving free expression?” — before suggesting more of the same: user-generated policing of its platform, which comes free to its business.

Another suggestion it sets out, in line with Facebook’s existing efforts to steer regulation in a direction it’s comfortable with, is the creation of a channel for users to appeal content removal or non-removal. That, of course, maps neatly onto the content decision review body Facebook is in the process of setting up — a body which is not in fact independent of Facebook.

Facebook is also lobbying in the white paper to be able to throw platform levers to meet a threshold of ‘acceptable vileness’ — i.e. it wants a proportion of law-violating content to be sanctioned by regulators — with the tech giant suggesting: “Companies could be incentivized to meet specific targets such as keeping the prevalence of violating content below some agreed threshold.”
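To make that kind of target concrete, here is a minimal, purely illustrative sketch (in Python) of how a ‘prevalence below an agreed threshold’ check could be expressed. The metric definition and the 0.05% figure are assumptions chosen for illustration, not Facebook’s or any regulator’s actual methodology.

# Illustrative sketch only: one way to express a "prevalence below an
# agreed threshold" target, assuming prevalence is measured as the share
# of content views that land on violating content. The 0.05% threshold
# is a hypothetical figure, not a real regulatory number.

def prevalence(violating_views: int, total_views: int) -> float:
    """Share of all content views that were views of violating content."""
    if total_views == 0:
        return 0.0
    return violating_views / total_views

def meets_target(violating_views: int, total_views: int,
                 threshold: float = 0.0005) -> bool:
    """True if measured prevalence stays below the agreed threshold."""
    return prevalence(violating_views, total_views) < threshold

# Example: 4,000 violating views out of 10,000,000 total views is 0.04%,
# which would sit under the hypothetical 0.05% target.
print(meets_target(4_000, 10_000_000))  # True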

It’s also pushing for the fuzziest and most dilute definition of “harmful content” possible. On this Facebook argues that existing (national) speech laws — such as, presumably, Germany’s Network Enforcement Act (aka the NetzDG law) which already covers online hate speech in that market — should not apply to Internet content platforms, as it claims moderating this type of content is “fundamentally different”.

“Governments should create rules to address this complexity — that recognize user preferences and the variation among internet services, can be enforced at scale, and allow for flexibility across language, trends and context,” it writes — lobbying for maximum possible leeway to be baked into the coming rules.

“The development of regulatory solutions should involve not just lawmakers, private companies and civil society, but also those who use online platforms,” Facebook’s VP of content policy, Monika Bickert, also writes in the blog.

“If designed well, new frameworks for regulating harmful content can contribute to the internet’s continued success by articulating clear ways for government, companies, and civil society to share responsibilities and work together. Designed poorly, these efforts risk unintended consequences that might make people less safe online, stifle expression and slow innovation,” she adds, ticking off more of the tech giant’s usual talking points just as policymakers start discussing putting hard limits on its ad business.

UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate to lay a duty of care on Internet platforms — responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However, digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies to uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns being raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it's planning to set a different bar for content deemed illegal vs content that has "potential to cause harm", with the heaviest content removal requirements planned for terrorist and child sexual exploitation content, whereas companies will not be forced to remove "specific pieces of legal content", as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that tackling online harms remains "a key legislative priority".

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meanwhile, the government says it’s working on a package of measures “to ensure progress now on online safety” — including interim codes of practice with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high-level pressure from ministers over how it handles content promoting self-harm and suicide, after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self-harm/suicide — saying it would limit how such content could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However, the UK government has been slow to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.