California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.

The California Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim the right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies hold on them, have that data deleted when they no longer want a company to have it, and demand that their data not be sold to third parties. All of this is much to the chagrin of the tech giants, some of which had spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.

But to say things are going well is a stretch.

Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some companies have made it downright difficult, and in some cases ironically more invasive, for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.

Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”

“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.

“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.

Despite his frustration, Davis got further than others. While some companies have made it easy for users to opt out of having their data sold by adding the legally required “Do not sell my info” links to their websites, many have not. Some have made it near impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are in a grace period that ends in July, when the CCPA’s enforcement provisions kick in. Until then, users are finding workarounds, collating and sharing links to data portals to help others access their data.

“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.

PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction of those, Cline said, extended their portals to users outside of California, even though other states are gearing up to pass laws similar to the CCPA.

But not all data portals are created equal. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each request to access or delete a user’s data without inadvertently giving it away to the wrong person.

Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which requires companies operating in Europe to give users access to their data.

(Image: Twitter/@jeanqasaur)

The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some companies, that’s just an email address to which the data can be sent.

Others require users to send in even more sensitive information just to prove they are who they say they are.

Indeed, i360, a little-known advertising and data company, until recently asked California residents for their full Social Security number. That has since been reduced to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes a step further by asking for a selfie before it will turn over any of a customer’s data.

Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup Clearview AI, which recently made headlines for building a surveillance system out of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.

As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help companies large and small handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.

Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.

The service asks users to grant it access to their inbox, scanning email subject lines for company names and using that information to determine which companies a user can ask for a copy of their data or ask to delete it. (The service requests access to a user’s Gmail, but the company says it will “never read” users’ emails.) Last month, during a publicity push, Mine inadvertently copied TechCrunch on a couple of emailed data requests, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.

(Screenshot: Zack Whittaker/TechCrunch)

TechCrunch alerted Mine — and the two requesters — to the security lapse.

“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”

For now, many startups have caught a break.

The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store personal data on more than 50,000 users or devices will largely escape having to comply with CCPA immediately. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.

“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning.”

CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.

But with the looming threat of hefty fines just months away, time is running out for the non-compliant.

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.

The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.

A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.

GDPR’s Article 22 gives individuals the right not to be subject to solely automated decision-making where such decisions produce legal or similarly significant effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.

In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.

Specifically, the court found that the SyRI legislation fails the balancing test in Article 8 of the ECHR, which requires that any societal interest be weighed against the interference with individuals’ private life, and that the balance struck be fair and reasonable.

In its current form the automated risk assessment system failed this test, in the court’s view.

Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.

In a press release about the judgement (translated into English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. Meanwhile, per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.

The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.

Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

So the decision by the Dutch court could have some near-term implications for UK policy in this area.

The judgement does not entirely shut the door on states’ use of automated profiling systems, but it does make clear that, in Europe, human rights law must be central to the design and implementation of rights-risking tools.

It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI — such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk-assessments and a patchwork of risk-based rules. 

Ancestry lays off 6% of staff as consumer genetic testing market continues to decline

Excitement in the consumer genetic testing market continues to show signs of slowing down.

In the past two weeks, layoffs have hit two of the biggest consumer genetic testing services, 23andMe and Ancestry, with the latter announcing earlier today, in a blog post, that it would cut its staff by 6%.

CNBC first reported the news.

In her blog post announcing the layoffs, Ancestry chief executive Margo Georgiadis wrote:

… over the last 18 months, we have seen a slowdown in consumer demand across the entire DNA category. The DNA market is at an inflection point now that most early adopters have entered the category. Future growth will require a continued focus on building consumer trust and innovative new offerings that deliver even greater value to people. Ancestry is well positioned to lead that innovation to inspire additional discoveries in both Family History and Health.

Today we made targeted changes to better position our business to these marketplace realities. These are difficult decisions and impact 6 percent of our workforce. Any changes that affect our people are made with the utmost care. We’ve done so in service to sharpening our focus and investment on our core Family History business and the long-term opportunity with AncestryHealth.

The move from Ancestry follows job cuts at 23andMe in late January, which saw 100 staffers (or roughly 14% of its workforce) lose their jobs.

Genetic testing company Illumina has been warning of softness in the direct-to-consumer genetic testing market, as Business Insider reported last August.

“We have previously based our DTC expectations on customer forecasts, but given unanticipated market softness, we are taking an even more cautious view of the opportunity in the near-term,” the company’s chief executive Francis deSouza said in a second quarter earnings call.

Consumers seem to be waking up to the privacy concerns over how genetic tests can be used.

“You can cancel your credit card. You can’t change your DNA,” Matt Mitchell, the director of digital safety and privacy for the advocacy organization Tactical Tech, told Business Insider earlier in the year.

And privacy laws in the U.S. have not caught up with the reality of how DNA testing is being used (and could potentially be abused), according to privacy experts and legal scholars.

“In the US we have taken to protecting genetic information separately rather than using more general privacy laws, and most of the people who’ve looked at it have concluded that’s a really bad idea,” Mark Rothstein, a law professor at the University of Louisville’s Brandeis School of Law and director of its Institute for Bioethics, Health Policy and Law, told Wired in May.

The investigation into the “Golden State Killer” and the eventual arrest of Joseph James DeAngelo, thanks to DNA evidence collected from an open source genealogy site called GEDMatch, likely helped focus consumers’ thinking on the issue.

In that case a relative of DeAngelo’s had uploaded their information onto the site and investigators found a close match with DNA at the crime scene. That information was then correlated with other details to eventually center on DeAngelo as a suspect in the crimes.

While consumer genetic testing services may be struggling, investors still see promise in clinical genetic testing: publicly traded InVitae has seen its share price rally, and privately held Color raised roughly $75 million in new capital in a round led by T. Rowe Price.