YouTube sellers found touting bogus coronavirus vaccines and masks

YouTube has been criticized for continuing to host coronavirus disinformation on its video sharing platform during a global health emergency.

Two US advocacy groups that campaign for online safety undertook an 18-day investigation of the video sharing platform in March, finding what they say were “dozens” of examples of dubious videos, including videos touting bogus vaccines the sellers claimed would protect buyers from COVID-19.

They also found videos advertising medical masks of unknown quality for sale.

There have been concerns about shortages of masks for front-line medical staff, as well as the risk of online scammers hawking low-grade kit that does not offer the claimed protection against the virus.

Google said last month that it would temporarily take down ads for masks from its ad network but sellers looking to exploit the coronavirus crisis appear to be circumventing the ban by using YouTube’s video sharing platform as an alternative digital shop window to lure buyers.

Researchers working for the Digital Citizens Alliance (DCA) and the Coalition for a Safer Web (CSW) initiated conversations with sellers they found touting dodgy coronavirus wares on YouTube — and were offered useless ‘vaccines’ for purchase and hundreds of masks of unknown quality.

“There was ample reason to believe the offers for masks were dubious as well [as the vaccines], as highlighted by interactions with representatives from some of the sellers,” they said.

Their report includes screengrabs of some of the interactions with the sellers. In one a seller tells the researchers they don’t accept credit cards — but they do accept CashApp, PayPal, Google or Amazon gift cards or Bitcoin.

The same seller offered the researchers vaccines priced at $135 each, and suggested they purchase MMR/Varicella when asked which one is “the best”. Such a vaccine, even if it functioned for MMR/Varicella, would obviously offer no protection against COVID-19.

Another seller was found to be hawking “COVID-19 drugs” using a YouTube account named “Real ID Card Fake Passport Producer”.

“How does a guy calling himself ‘Real ID Card Fake Passport Producer’ even get a page on YouTube?” said Eric Feinberg, lead researcher for CSW, in a statement accompanying the report. “It’s all too easy to get ahold of these guys. We called some of them. Once you contact them, they are relentless. They’ll call you back at all hours and hound you until you buy something. They’ll call you in the middle of the night. They are predators looking to capitalize on our fear.”

A spokesman for the DCA told us the researchers compiled the report based on content from around 60 videos they identified hawking coronavirus-related ‘cures’ or kit between March 6-24.

“There are too many to count. Every day, I find more,” added Feinberg.

The groups are also critical of how YouTube risks lending credibility to coronavirus disinformation: the platform now displays official CDC-branded banners under any COVID-19 related material, including the dubious videos their report highlights.

“YouTube also mixes trusted resources with sites that shouldn’t be trusted and that could confuse consumers — especially when they are scared and desperate,” said DCA executive director, Tom Galvin, in a statement. “It’s hard enough to tell who’s legitimate and who’s not on YouTube.”

The DCA and CSW have written letters to the US Department of Justice and the Federal Trade Commission laying out their findings and calling for “swift action” to hold bad actors accountable.

“YouTube, and its parent company Google, are shirking their formal policy that prohibits content that capitalizes off sensitive events,” they write in a letter to attorney general Barr.

“Digital Citizens is sharing this information in the hopes your Justice Department will act swiftly to hold bad actors, who take advantage of the coronavirus, accountable. In this crisis, strong action will deter others from engaging in criminal or illicit acts that harm consumers or add to confusion and anxiety,” they add.

Responding to the groups’ findings, a YouTube spokesperson said some of the videos the researchers had identified had not received many views.

After we contacted it about the content, YouTube also said it had removed three channels identified by the researchers in the report for violating its Community Guidelines.

In a statement YouTube added:

Our thoughts are with everyone affected by the coronavirus around the world. We’re committed to providing helpful information at this critical time, including raising authoritative content, reducing the spread of harmful misinformation and showing information panels, using WHO / CDC data, to help combat misinformation. To date, there have been over 5B impressions on our information panels for coronavirus related videos and searches. We also have clear policies against COVID-19 misinformation and we quickly remove videos violating these policies when flagged to us.

The DCA and CSW also recently undertook a similar review of Facebook’s platform — finding sellers touting masks for sale despite the tech giant’s claimed ban on such content. “Facebook promised CNN when they did a story on our report about them that the masks would be gone a week ago, but the researchers from CSW are still finding the masks now,” their spokesman told us.

Earlier this week the Tech Transparency Project also reported still being able to find masks for sale on Facebook’s platform. It found examples of masks showing up in Google’s targeted ads too.

UK names its pick for social media ‘harms’ watchdog

The UK government has taken the next step in its grand policymaking challenge to tame the worst excesses of social media by regulating a broad range of online harms — naming the existing communications watchdog, Ofcom, as its preferred pick for enforcing rules around ‘harmful speech’ on platforms such as Facebook, Snapchat and TikTok in future.

Last April the previous Conservative-led government laid out populist but controversial proposals to legislate a duty of care for Internet platforms, responding to growing public concern about the types of content kids are being exposed to online.

Its white paper covers a broad range of online content — from terrorism, violence and hate speech, to child exploitation, self-harm/suicide, cyber bullying, disinformation and age-inappropriate material — with the government setting out a plan to require platforms to take “reasonable” steps to protect their users from a range of harms.

However digital and civil rights campaigners warn the plan will have a huge impact on online speech and privacy, arguing it will put a legal requirement on platforms to closely monitor all users and apply speech-chilling filtering technologies on uploads in order to comply with very broadly defined concepts of harm — dubbing it state censorship. Legal experts are also critical.

The (now) Conservative majority government has nonetheless said it remains committed to the legislation.

Today it responded to some of the concerns raised about the plan’s impact on freedom of expression, publishing a partial response to the public consultation on the Online Harms White Paper, although a draft bill remains pending, with no timeline confirmed.

“Safeguards for freedom of expression have been built in throughout the framework,” the government writes in an executive summary. “Rather than requiring the removal of specific pieces of legal content, regulation will focus on the wider systems and processes that platforms have in place to deal with online harms, while maintaining a proportionate and risk-based approach.”

It says it’s planning to set a different bar for content deemed illegal versus content that has “potential to cause harm”, with the heaviest content removal requirements planned for terrorist and child sexual exploitation content. Companies will not, however, be forced to remove “specific pieces of legal content”, as the government puts it.

Ofcom, as the online harms regulator, will also not be investigating or adjudicating on “individual complaints”.

“The new regulatory framework will instead require companies, where relevant, to explicitly state what content and behaviour they deem to be acceptable on their sites and enforce this consistently and transparently. All companies in scope will need to ensure a higher level of protection for children, and take reasonable steps to protect them from inappropriate or harmful content,” it writes.

“Companies will be able to decide what type of legal content or behaviour is acceptable on their services, but must take reasonable steps to protect children from harm. They will need to set this out in clear and accessible terms and conditions and enforce these effectively, consistently and transparently. The proposed approach will improve transparency for users about which content is and is not acceptable on different platforms, and will enhance users’ ability to challenge removal of content where this occurs.”

Another requirement will be that companies have “effective and proportionate user redress mechanisms” — enabling users to report harmful content and challenge content takedown “where necessary”.

“This will give users clearer, more effective and more accessible avenues to question content takedown, which is an important safeguard for the right to freedom of expression,” the government suggests, adding that: “These processes will need to be transparent, in line with terms and conditions, and consistently applied.”

Ministers say they have not yet made a decision on what kind of liability senior management of covered businesses may face under the planned law, nor on additional business disruption measures — with the government saying it will set out its final policy position in the Spring.

“We recognise the importance of the regulator having a range of enforcement powers that it uses in a fair, proportionate and transparent way. It is equally essential that company executives are sufficiently incentivised to take online safety seriously and that the regulator can take action when they fail to do so,” it writes.

It’s also not clear how businesses will be assessed as being in (or out of) scope of the regulation.

“Just because a business has a social media page that does not bring it in scope of regulation,” the government response notes. “To be in scope, a business would have to operate its own website with the functionality to enable sharing of user-generated content, or user interactions. We will introduce this legislation proportionately, minimising the regulatory burden on small businesses. Most small businesses where there is a lower risk of harm occurring will not have to make disproportionately burdensome changes to their service to be compliant with the proposed regulation.”

The government is clear in the response that online harms legislation remains “a key legislative priority”.

“We have a comprehensive programme of work planned to ensure that we keep momentum until legislation is introduced as soon as parliamentary time allows,” it writes, describing today’s response report as “an iterative step as we consider how best to approach this complex and important issue” — and adding: “We will continue to engage closely with industry and civil society as we finalise the remaining policy.”

In the meantime, the government says it’s working on a package of measures “to ensure progress now on online safety”, including interim codes of practice with guidance for companies on tackling terrorist and child sexual abuse and exploitation content online; an annual government transparency report, which it says it will publish “in the next few months”; and a media literacy strategy to support public awareness of online security and privacy.

It adds that it expects social media platforms to “take action now to tackle harmful content or activity on their services” — ahead of the more formal requirements coming in.

Facebook-owned Instagram has come in for high level pressure from ministers over how it handles content promoting self-harm and suicide after the media picked up on a campaign by the family of a schoolgirl who killed herself after being exposed to Instagram content encouraging self-harm.

Instagram subsequently announced changes to its policies for handling content that encourages or depicts self harm/suicide — saying it would limit how it could be accessed. This later morphed into a ban on some of this content.

The government said today that companies offering online services that involve user generated content or user interactions are expected to make use of what it dubs “a proportionate range of tools” — including age assurance, and age verification technologies — to prevent kids from accessing age-inappropriate content and “protect them from other harms”.

This is also the piece of the planned legislation intended to pick up the baton of the Digital Economy Act’s porn block proposals — which the government dropped last year, saying it would bake equivalent measures into the forthcoming Online Harms legislation.

The Home Office has been consulting with social media companies on devising robust age verification technologies for many months.

In its own response statement today, Ofcom — which would be responsible for policy detail under the current proposals — said it will work with the government to ensure “any regulation provides effective protection for people online”, and, pending appointment, “consider what we can do before legislation is passed”.

The Online Harms plan is not the only Internet-related work ongoing in Whitehall, with ministers noting that: “Work on electoral integrity and related online transparency issues is being taken forward as part of the Defending Democracy programme together with the Cabinet Office.”

Back in 2018 a UK parliamentary committee called for a levy on social media platforms to fund digital literacy programs to combat online disinformation and defend democratic processes, during an enquiry into the use of social media for digital campaigning. However the UK government has been slower to act on this front.

The former chair of the DCMS committee, Damian Collins, called today for any future social media regulator to have “real powers in law” — including the ability to “investigate and apply sanctions to companies which fail to meet their obligations”.

In the DCMS committee’s final report parliamentarians called for Facebook’s business to be investigated, raising competition and privacy concerns.

Facebook fails to keep Messenger Kids’ safety promise

Facebook’s messaging app for under 13s, Messenger Kids — which launched two years ago pledging a “private” chat space for kids to talk with contacts specifically approved by their parents — has run into an embarrassing safety issue.

The Verge obtained messages Facebook sent to an unknown number of parents of users of the app, informing them the company had found what it couches as “a technical error” that allowed a friend of a child to create a group chat with them in the app and invite one or more of the second child’s parent-approved friends, i.e. contacts who had not been approved by the parent of the first child.

Facebook did not make a public disclosure of the safety issue. We’ve reached out to the company with questions.

It earlier confirmed the bug to the Verge, telling it: “We recently notified some parents of Messenger Kids account users about a technical error that we detected affecting a small number of group chats. We turned off the affected chats and provided parents with additional resources on Messenger Kids and online safety.”

The issue appears to have arisen as a result of how Messenger Kids’ permissions are applied in group chat scenarios — where the multi-user chats apparently override the system of required parental approval for contacts who kids are chatting with one on one.

But given the app’s support for group messaging, it’s pretty incredible that Facebook engineers failed to robustly enforce an additional layer of checks on friends of friends to prevent unapproved users (who could include adults) from being able to connect and chat with children.
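To illustrate the class of flaw being described — and to be clear, this is a hypothetical sketch, not Facebook's actual code — the reported behavior amounts to checking parental approval only for the pairing between a group's creator and an invitee, while skipping the check for every other member already in the chat. All names and functions below are illustrative assumptions:

```python
class KidAccount:
    """Hypothetical model of a child account with parent-approved contacts."""
    def __init__(self, name):
        self.name = name
        self.approved_contacts = set()  # names this kid's parent has approved

    def parent_approves(self, other):
        self.approved_contacts.add(other.name)

def can_chat_directly(a, b):
    # One-to-one chat requires mutual parental approval.
    return b.name in a.approved_contacts and a.name in b.approved_contacts

def buggy_group_invite(creator, group, invitee):
    # Flawed behavior: only the creator/invitee pairing is checked, so other
    # members can end up in a chat with someone their parents never approved.
    if can_chat_directly(creator, invitee):
        group.append(invitee)

def safe_group_invite(group, invitee):
    # Safer behavior: every existing member must be approved to chat with
    # the invitee before the invitee joins.
    if all(can_chat_directly(member, invitee) for member in group):
        group.append(invitee)
```

Under the buggy check, if Bob's parents approved Carol but Alice's did not, Bob can still pull Carol into a group containing Alice; the safer variant rejects the invite.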

The Verge reports that “thousands” of children were left in chats with unauthorized users as a result of the flaw.

Despite its long history of playing fast and loose with user privacy, at the launch of Messenger Kids in 2017 the then head of Facebook Messenger, David Marcus, was quick to throw shade at other apps kids might use to chat — saying: “In other apps, they can contact anyone they want or be contacted by anyone.”

Turns out Facebook’s Messenger Kids has also allowed unapproved users into chatrooms the company claimed were safe spaces for kids, despite Facebook saying it had developed the app in “lockstep” with the FTC.

We’ve reached out to the FTC to ask if it will be investigating the safety breach.

Friends’ data has been something of a recurring privacy blackhole for Facebook — enabling, for example, the misuse of millions of users’ personal information without their knowledge or consent as a result of the expansive permissions Facebook wrapped around it, when the now defunct political data company, Cambridge Analytica, paid a developer to harvest Facebook data to build psychographic profiles of US voters.

The company is reportedly on the verge of being issued with a $5BN penalty by the FTC related to an investigation of whether it breached earlier privacy commitments made to the regulator.

Various data protection laws govern apps that process children’s data, including the Children’s Online Privacy Protection Act (Coppa) in the US and the General Data Protection Regulation in Europe. But while there are potential privacy issues here with the Messenger Kids flaw, given children’s data may have been shared with unauthorized third parties as a result of the “error”, the main issue of concern for parents is likely the safety risk of their children being exposed to people they have not authorized in an unsupervised video chat environment.

On that issue current laws have less of a support framework to offer.

Although — in Europe — rising concern about a range of risks and harms kids can face when going online has led the UK government to seek to regulate the area.

A recently published white paper sets out its plan to regulate a broad range of online harms, including a proposed mandatory duty of care on platforms to take reasonable steps to protect users from harms such as child sexual exploitation.