Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.

The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.

A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.

GDPR’s Article 22 gives individuals the right not to be subject to solely automated decision-making where such decisions produce legal or similarly significant effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as a human reviewing a decision after an objection.

In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.

Specifically, the court found that the SyRI legislation fails the balancing test in Article 8 of the ECHR, which requires that any social interest be weighed against the violation of individuals’ private life, with a fair and reasonable balance being struck between the two.

In its current form the automated risk assessment system failed this test, in the court’s view.

Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.

In a press release about the judgement (translated to English using Google Translate), the court writes that the use of SyRI is “insufficiently clear and controllable”, while, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.

The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.

Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

So the decision by the Dutch court could have some near-term implications for UK policy in this area.

The judgement does not shut the door entirely on states’ use of automated profiling systems, but it does make clear that, in Europe, human rights law must be central to the design and implementation of rights-risking tools.

It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

Ethical fashion is on the rise

The fashion industry has historically relied on exploitative, unsustainable and unethical labor practices in order to sell clothes — but if recent trends are any indication, it won’t for much longer. Over the last several years, the industry has entered a remarkable period of upheaval, with major and small fashion brands alike ditching traditional methods of production in favor of eco-friendly and cruelty-free alternatives. It’s a welcome, long-overdue development, and it’s showing no signs of slowing down.

Traditional fashion is unethical in almost too many ways to count. There is, of course, the monstrous toll on animal life. Every year, over one billion animals are slaughtered for their fur or pelts, usually after living their lives in horrific factory farms.

Cows, including newborn and even unborn calves, are skinned alive in order to make leather, while animals killed for their fur are dispatched via anal electrocution, neck-snapping, drowning and other ghastly methods in order to avoid damaging their pelts. Even wool, traditionally perceived as a more humanely produced animal product, involves horrors on par with those at a slaughterhouse.

But animals aren’t the only ones who suffer under the traditional fashion industry. In Cambodian garment factories, which export around $5.7 billion in clothes every year, workers earning 50 cents an hour are forced to sit for 11 hours a day straight without using the restroom, according to Human Rights Watch.

Mass faintings in oppressively hot factories are common, and workers are routinely fired for getting sick or pregnant. In Bangladesh, the world’s second-largest exporter of apparel behind China, a poorly maintained garment factory collapsed in 2013, killing 1,132 people and injuring around 2,000 others. When Cambodian garment workers protested in 2014 for better working conditions, police shot and killed three of them.

Lastly, traditional fashion is killing the planet. Every year, the textile industry alone spits out 1.2 billion tons of greenhouse gases, more than all marine shipping vessels and international flights combined, and consumes 98 million tons of oil. Textile dyeing is the second-largest polluter of clean water, and on the whole, the apparel industry accounts for 10 percent of all greenhouse emissions worldwide. Worst of all, the clothes produced by this massive resource consumption are rapidly discarded: in 2015, 73 percent of the total material used to make clothes ended up incinerated or landfilled, according to a study by the Ellen MacArthur Foundation.

Thankfully, as big and small clothing manufacturers alike are realizing, there are plenty of ways to sell fashionable clothing and accessories that don’t destroy the environment, endanger workers, or cause suffering to animals.

Vegan clothes are becoming increasingly popular, and there’s no shortage of them to choose from. Some brands, like Keep Company and Unicorn Goods, offer an expansive generalized catalogue of vegan shirts, jackets, accessories and more. Other brands are more specialized: Unreal Fur has a beautiful line of vegan faux-fur, Ahimsa, Beyond Skin and SUSI Studio all sell stylish vegan shoes, and Le Buns specializes in vegan swimwear. There are upscale vegan clothing retailers, such as Brave Gentleman, as well as more casual budget options, like The Third Estate.

Strict veganism isn’t the only way to manufacture clothing ethically. Hipsters For Sisters’ products are made entirely with recycled, upcycled, or deadstocked materials, earning the approval of PETA. Reformation utilizes a carbon-neutral production process to make its clothes (and offers customers a $100 store credit if they switch to wind energy), while Stella McCartney’s entire product line is vegetarian.

[Photo: British fashion designer Stella McCartney ahead of the men’s and women’s spring/summer 2019 shows in Milan, June 18, 2018. MIGUEL MEDINA/AFP/Getty Images]

Many vegan clothing companies, such as In The Soulshine and Della, have found ways to sell cruelty-free clothing while also providing humane working conditions to their factories’ workers. Amanda Hearst’s Maison de Mode features a combination of Fair Trade, recycled, cruelty-free, and organic products — as well as a comprehensive labeling system to inform customers which is which.

There are plenty of small, niche companies offering ethical clothing options, but make no mistake: The transition to sustainable and ethical fashion is an industry-wide phenomenon. Well-established brands like Dr. Martens, Old Navy, H&M and Zara all now sell vegan clothes. Gap, Gucci, and Hugo Boss have banned fur from their stores, and three of the largest fashion conglomerates — H&M Group, Arcadia Group and Inditex — recently pledged to stop selling mohair products by 2020.

Companies are rapidly investing in new ethical alternatives to traditional clothing as well: Save The Duck’s PLUMTECH jackets feature a cruelty-free alternative to down feathers, while companies like Modern Meadow are developing biofabricated leather, grown from collagen protein and other building blocks found in animal skin, that doesn’t require the slaughter of any animals.

There are, of course, some holdouts. Canada Goose still traps and kills coyotes to make its fur jackets, and uses a device that’s been banned in dozens of countries for its cruelty in order to do so. As a result, its store openings regularly draw protesters.

But by and large, the trend is in the opposite direction. From up-and-coming brands to the biggest names in fashion, the industry is moving away from the destructive practices of years past and toward cleaner, ethical ways of making clothes.

It shouldn’t be a surprise. After all, being successful in fashion has always required changing with the times — and in 2019, a business built on labor abuse, environmental destruction and animal torture is no longer a sustainable model.

Apple, Google, Microsoft, WhatsApp sign open letter condemning GCHQ proposal to listen in on encrypted chats

An international coalition of civic society organizations, security and policy experts and tech companies — including Apple, Google, Microsoft and WhatsApp — has penned a critical slap-down to a surveillance proposal made last year by the UK’s intelligence agency, warning it would undermine trust and security and threaten fundamental rights.

“The GCHQ’s ghost protocol creates serious threats to digital security: if implemented, it will undermine the authentication process that enables users to verify that they are communicating with the right people, introduce potential unintentional vulnerabilities, and increase risks that communications systems could be abused or misused,” they write.

“These cybersecurity risks mean that users cannot trust that their communications are secure, as users would no longer be able to trust that they know who is on the other end of their communications, thereby posing threats to fundamental human rights, including privacy and free expression. Further, systems would be subject to new potential vulnerabilities and risks of abuse.”

GCHQ’s idea for a so-called ‘ghost protocol’ would see state intelligence or law enforcement agencies invisibly CC’d by service providers into encrypted communications, on what’s billed as a targeted, government-authorized basis.

The agency set out the idea in an article published last fall on the Lawfare blog, written by the National Cyber Security Centre’s (NCSC) Ian Levy and GCHQ’s Crispin Robinson (NB: the NCSC is a public-facing branch of GCHQ) — which they said was intended to open a discussion about the ‘going dark’ problem that robust encryption poses for security agencies.

The pair argued that such an “exceptional access mechanism” could be baked into encrypted platforms, enabling end-to-end encryption to be bypassed by state agencies, which could instruct the platform provider to add them as a silent listener to eavesdrop on a conversation — but without the encryption protocol itself being compromised.

“It’s relatively easy for a service provider to silently add a law enforcement participant to a group chat or call. The service provider usually controls the identity system and so really decides who’s who and which devices are involved — they’re usually involved in introducing the parties to a chat or call,” Levy and Robinson argued. “You end up with everything still being end-to-end encrypted, but there’s an extra ‘end’ on this particular communication. This sort of solution seems to be no more intrusive than the virtual crocodile clips that our democratically elected representatives and judiciary authorise today in traditional voice intercept solutions and certainly doesn’t give any government power they shouldn’t have.”

“We’re not talking about weakening encryption or defeating the end-to-end nature of the service. In a solution like this, we’re normally talking about suppressing a notification on a target’s device, and only on the device of the target and possibly those they communicate with. That’s a very different proposition to discuss and you don’t even have to touch the encryption.”

“[M]ass-scale, commodity, end-to-end encrypted services… today pose one of the toughest challenges for targeted lawful access to data and an apparent dichotomy around security,” they added.
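To make the mechanics they describe concrete, here is a minimal toy sketch, in Python using the PyNaCl library, of fan-out group encryption where the provider controls the membership list. The names and structure are hypothetical, not any real messaging protocol; the point is simply that whoever controls the identity layer can add an extra “end” without touching the cipher:

```python
# Toy model of provider-controlled group membership (illustrative only;
# not any real messaging protocol). Requires: pip install pynacl
from dataclasses import dataclass, field
from nacl.public import PrivateKey, SealedBox

@dataclass
class Member:
    name: str
    key: PrivateKey = field(default_factory=PrivateKey.generate)
    silent: bool = False  # a "ghost": the provider announces nothing

class GroupChat:
    def __init__(self):
        self.members: list[Member] = []

    def add_member(self, member: Member) -> None:
        # The provider "really decides who's who": clients only learn of
        # a new member if the provider chooses to announce the change.
        self.members.append(member)
        if not member.silent:
            print(f"* {member.name} joined the chat")

    def send(self, plaintext: bytes) -> dict[str, bytes]:
        # Fan-out encryption: one ciphertext per member. Every message is
        # still end-to-end encrypted; there is just an extra "end".
        return {m.name: SealedBox(m.key.public_key).encrypt(plaintext)
                for m in self.members}

chat = GroupChat()
chat.add_member(Member("alice"))
chat.add_member(Member("bob"))
chat.add_member(Member("eavesdropper", silent=True))  # the invisible CC

ciphertexts = chat.send(b"meet at noon")
for m in chat.members:  # every member, announced or not, can decrypt
    assert SealedBox(m.key).decrypt(ciphertexts[m.name]) == b"meet at noon"
```

Nothing in the encryption layer fails in this sketch, which is exactly the coalition’s worry: the weakness lives in the membership and authentication machinery around it.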

However, while the encryption might technically remain intact in the scenario they sketch, their argument glosses over both the fact that the scheme bypasses encryption by tampering with authentication systems and the risks of enabling deceptive third-party snooping that way.

As the coalition’s letter points out, doing that would both undermine user trust and inject extra complexity — with the risk of fresh vulnerabilities that could be exploited by hackers.

Compromising authentication would also result in platforms themselves gaining a mechanism that they could use to snoop on users’ comms — thereby circumventing the wider privacy benefits provided by end to end encryption in the first place, perhaps especially when deployed on commercial messaging platforms.

In other words, just because what’s being asked for is not literally a backdoor in the encryption doesn’t mean it isn’t similarly risky for security and privacy, and just as corrosive to user trust and rights.

“Currently the overwhelming majority of users rely on their confidence in reputable providers to perform authentication functions and verify that the participants in a conversation are the people that they think they are, and only those people. The GCHQ’s ghost protocol completely undermines this trust relationship and the authentication process,” the coalition writes, also pointing out that authentication remains an active research area — and that work would likely dry up if the systems in question were suddenly made fundamentally untrustworthy on order of the state.
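Those authentication functions are, in practice, things like the safety numbers or security codes apps let users compare to confirm they hold each other’s real keys. A rough sketch of the concept, using hypothetical names rather than any real app’s actual scheme, shows why a silently added participant and honest authentication cannot coexist:

```python
# Sketch of a safety-number-style check (hypothetical scheme): both
# parties derive a short code from the conversation's identity keys
# and compare it out of band. Any change to the key set changes the code.
import hashlib

def safety_number(identity_keys: list[bytes]) -> str:
    digest = hashlib.sha256(b"".join(sorted(identity_keys))).hexdigest()
    return " ".join(digest[i:i + 5] for i in range(0, 30, 5))

alice, bob, ghost = b"alice-pk", b"bob-pk", b"ghost-pk"

# What the two real participants each display and compare today:
assert safety_number([alice, bob]) == safety_number([bob, alice])

# Adding a ghost changes the true key set, so an honest client would
# show a new code and a key-change warning. Hiding the ghost therefore
# means making the client lie about which keys it is encrypting to.
assert safety_number([alice, bob, ghost]) != safety_number([alice, bob])
```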

They further assert there’s no way for the security risk to be targeted to the individuals that state agencies want to specifically snoop on. Ergo, the added security risk is universal.

“The ghost protocol would introduce a security threat to all users of a targeted encrypted messaging application since the proposed changes could not be exposed only to a single target,” they warn. “In order for providers to be able to suppress notifications when a ghost user is added, messaging applications would need to rewrite the software that every user relies on. This means that any mistake made in the development of this new function could create an unintentional vulnerability that affects every single user of that application.”
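In code terms, the rewrite the coalition describes could be as small, and as universally shipped, as a hypothetical membership handler like the one below; the suppression branch reaches every installation whether or not its user is ever targeted, and any bug in it is a bug for everyone:

```python
# Hypothetical client-side membership handler (illustrative only).
from dataclasses import dataclass, field

@dataclass
class MembershipUpdate:
    new_member: str
    suppress_notification: bool = False  # the ghost-enabling flag

@dataclass
class ChatClient:
    members: list[str] = field(default_factory=list)

    def handle(self, update: MembershipUpdate) -> None:
        self.members.append(update.new_member)
        if update.suppress_notification:
            return  # no UI event, no key-change warning
        print(f"* {update.new_member} joined")

client = ChatClient()
client.handle(MembershipUpdate("bob"))
client.handle(MembershipUpdate("ghost", suppress_notification=True))
assert client.members == ["bob", "ghost"]  # a member the user was never shown
```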

There are more than 50 signatories to the letter in all, including civic society and privacy rights groups such as Human Rights Watch, Reporters Without Borders, Liberty, Privacy International and the EFF; veteran security professionals such as Bruce Schneier, Philip Zimmermann and Jon Callas; and policy experts such as Ashkan Soltani, a former FTC CTO and White House security advisor.

While the letter welcomes other elements of the article penned by Levy and Robinson — which also set out a series of principles for defining a “minimum standard” governments should meet to have their requests accepted by companies in other countries (with the pair writing, for example, that “privacy and security protections are critical to public confidence” and “transparency is essential”) — it ends by urging GCHQ to abandon the ghost protocol idea altogether, and “avoid any alternative approaches that would similarly threaten digital security and human rights”.

Reached for a response to the coalition’s concerns, the NCSC sent us the following statement, attributed to Levy:

We welcome this response to our request for thoughts on exceptional access to data — for example to stop terrorists. The hypothetical proposal was always intended as a starting point for discussion.

It is pleasing to see support for the six principles and we welcome feedback on their practical application. We will continue to engage with interested parties and look forward to having an open discussion to reach the best solutions possible.

Back in 2016 the UK passed updated surveillance legislation that affords state agencies expansive powers to snoop on and hack into digital comms. And with such an intrusive regime in place it may seem odd that GCHQ is pushing for even greater powers to snoop on people’s digital chatter.

Even robust end-to-end encryption can include exploitable vulnerabilities. One bug was disclosed affecting WhatsApp just a couple of weeks ago, for example (since fixed via an update).

However, in the Lawfare article the GCHQ staffers argue that “lawful hacking” of target devices is not a panacea to governments’ “lawful access requirements”, because it would require governments to keep vulnerabilities on the shelf for use in hacking devices — which “is completely at odds with the demands for governments to disclose all vulnerabilities they find to protect the population”.

“That seems daft,” they conclude.

Yet it also seems daft — and predictably so — to suggest a ‘sidedoor’ in authentication systems as an alternative to a backdoor in encrypted messaging apps.