
A new senate bill would create a US data protection agency

Europe’s data protection laws are some of the strictest in the world, and have long been a thorn in the side of the data-guzzling Silicon Valley tech giants since they colonized vast swathes of the internet.

Two decades after those rules first took shape, one Democratic senator wants to bring many of their concepts to the United States.

Sen. Kirsten Gillibrand (D-NY) has published a bill which, if passed, would create a U.S. federal data protection agency designed to protect the privacy of Americans, with the authority to enforce data practices across the country. The bill, which Gillibrand calls the Data Protection Act, would address a “growing data privacy crisis” in the U.S., the senator said.

The U.S. is one of only a few countries without a data protection law, putting it in the same company as Venezuela, Libya, Sudan and Syria. Gillibrand said the U.S. is “vastly behind” other countries on data protection.

Gillibrand said a new data protection agency would “create and meaningfully enforce” data protection and privacy rights federally.

“The data privacy space remains a complete and total Wild West, and that is a huge problem,” the senator said.

The bill comes at a time when tech companies are facing increased attention from state and federal regulators over their data and privacy practices. Last year, Facebook settled a $5 billion privacy case with the Federal Trade Commission, a settlement critics decried for failing to bring civil charges or levy any meaningful consequences. Months later, Google settled a child privacy case for $170 million, about a day’s worth of the search giant’s revenue.

Gillibrand pointedly called out Google and Facebook in a Medium post for “making a whole lot of money” from their empires of data. Americans “deserve to be in control of [their] own data,” she wrote.

At its heart, the bill would, if signed into law, allow the newly created agency to hear and adjudicate complaints from consumers and to declare certain privacy-invading tactics unfair and deceptive. Acting as the government’s “referee,” the agency would take point on federal data protection and privacy matters, such as launching investigations against companies accused of wrongdoing. Gillibrand’s bill specifically takes issue with “take-it-or-leave-it” provisions, notably websites that compel a user to “agree” to allowing cookies with no way to opt out. (TechCrunch’s parent company Verizon Media enforces a ‘consent required’ policy for European users under GDPR, though most Americans never see the prompt.)

Through its enforcement arm, the would-be federal agency would also have the power to bring civil action against companies, and to fine companies for egregious breaches of the law up to $1 million a day, subject to a court’s approval.

The bill would transfer some authorities from the Federal Trade Commission to the new data protection agency.

Gillibrand’s bill lands just a month after California’s consumer privacy law took effect, more than a year after it was signed into law. The law extended much of Europe’s revised privacy laws, known as GDPR, to the state. But Gillibrand’s bill would not affect state laws like California’s, her office confirmed in an email.

Privacy groups and experts have already offered positive reviews.

Caitriona Fitzgerald, policy director at the Electronic Privacy Information Center, said the bill is a “bold, ambitious proposal.” Other groups, including Color of Change and Consumer Action, praised the effort to establish a federal data protection watchdog.

Michelle Richardson, director of the Privacy and Data Project at the Center for Democracy and Technology, reviewed a summary of the bill.

“The summary seems to leave a lot of discretion to executive branch regulators,” said Richardson. “Many of these policy decisions should be made by Congress and written clearly into statute.” She warned it could take years to know if the new regime has any meaningful impact on corporate behaviors.

Gillibrand’s bill stands alone for now; the senator is its only sponsor. But given the appetite of some lawmakers on both sides of the aisle to crash the Silicon Valley data party, it’s likely to pick up bipartisan support in no time.

Whether it makes it to the president’s desk without a fight from the tech giants remains to be seen.

ACLU says it’ll fight DHS efforts to use app locations for deportations

The American Civil Liberties Union plans to fight newly revealed practices by the Department of Homeland Security, which has been using commercially available cellphone location data to track people suspected of immigration violations.

“DHS should not be accessing our location information without a warrant, regardless of whether they obtain it by paying or for free. The failure to get a warrant undermines Supreme Court precedent establishing that the government must demonstrate probable cause to a judge before getting some of our most sensitive information, especially our cell phone location history,” said Nathan Freed Wessler, a staff attorney with the ACLU’s Speech, Privacy, and Technology Project.

Earlier today, The Wall Street Journal reported that Homeland Security, through its Immigration and Customs Enforcement (ICE) and Customs & Border Protection (CBP) agencies, has been buying geolocation data from commercial entities to investigate suspected immigration violations.

The location data, which aggregators acquire from cellphone apps including games, weather, shopping, and search services, is being used by Homeland Security to detect undocumented immigrants and others entering the U.S. unlawfully, the Journal reported.

According to privacy experts interviewed by the Journal, since the data is publicly available for purchase, the government practices don’t appear to violate the law — despite being what may be the largest dragnet ever conducted by the U.S. government using the aggregated data of its citizens.

It’s also an example of how the commercial surveillance apparatus put in place by private corporations in democratic societies can be legally accessed by state agencies to create the same kind of surveillance networks used in more authoritarian countries like China, India, and Russia.

“This is a classic situation where creeping commercial surveillance in the private sector is now bleeding directly over into government,” Alan Butler, general counsel of the Electronic Privacy Information Center, a think tank that pushes for stronger privacy laws, told the newspaper.

Behind the government’s use of commercial data is a company called Venntel. Based in Herndon, Va., the company acts as a government contractor and shares a number of its executive staff with Gravy Analytics, a mobile-advertising marketing analytics company. In all, ICE and CBP have spent nearly $1.3 million on licenses for software that can provide location data for cell phones. Homeland Security says that the data from these commercially available records is used to generate leads about border crossings and to detect human traffickers.

The ACLU’s Wessler has won these kinds of cases in the past. He successfully argued before the Supreme Court in the case of Carpenter v. United States that geographic location data from cellphones was a protected class of information and couldn’t be obtained by law enforcement without a warrant.

CBP explicitly excludes cell tower data from the information it collects from Venntel, a spokesperson for the agency told the Journal, in part because the law requires it to. The agency also said that it accesses only limited location data, and that the data is anonymized.

However, anonymized data can be linked to specific individuals by correlating the anonymous cellphone traces with the real-world movements of specific people, which can be deduced or tracked through other types of public records and publicly available social media.
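As a rough illustration of the correlation attack described above, the sketch below matches an “anonymized” device trace against publicly known anchor points for a person, such as a home address from property records. All device IDs, coordinates, and thresholds here are invented for illustration; real re-identification operates on far larger datasets but follows the same logic.

```python
from math import hypot

# Hypothetical anonymized pings: random device IDs, (lat, lon) points.
# The IDs reveal nothing, but the movement pattern is unique to a person.
pings = {
    "device_a": [(40.71, -74.00), (40.71, -74.00), (40.75, -73.99)],
    "device_b": [(34.05, -118.24), (34.10, -118.30), (34.05, -118.24)],
}

# Publicly knowable anchor points for a target individual, e.g. a home
# address from property records and a workplace from social media posts.
anchors = [(40.71, -74.00), (40.75, -73.99)]

def near(ping, anchor, radius=0.01):
    """True if a ping falls within ~1 km (about 0.01 degrees) of an anchor."""
    return hypot(ping[0] - anchor[0], ping[1] - anchor[1]) <= radius

def reidentify(pings, anchors):
    """Return the device whose trace covers the most anchor points."""
    best, best_score = None, 0
    for device, trace in pings.items():
        # Count how many known locations this device has visited.
        score = sum(any(near(p, a) for p in trace) for a in anchors)
        if score > best_score:
            best, best_score = device, score
    return best

print(reidentify(pings, anchors))  # "device_a" covers both anchors
```

In practice only a handful of anchor points (home, workplace, a regular commute) is enough to single out one device among millions, which is why “anonymized” location data offers weak protection.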

ICE is already being sued by the ACLU for another potential privacy violation. Late last year the ACLU said that it was taking the government to court over the agency’s use of so-called “stingray” technology, which spoofs a cell phone tower to determine someone’s location.

At the time, the ACLU cited a government oversight report in 2016 which indicated that both CBP and ICE collectively spent $13 million on buying dozens of stingrays, which the agencies used to “locate people for arrest and prosecution.”

Blackbox welfare fraud detection system breaches human rights, Dutch court rules

An algorithmic risk scoring system deployed by the Dutch state to try to predict the likelihood that social security claimants will commit benefits or tax fraud breaches human rights law, a court in the Netherlands has ruled.

The Dutch government’s System Risk Indication (SyRI) legislation uses a non-disclosed algorithmic risk model to profile citizens and has been exclusively targeted at neighborhoods with mostly low-income and minority residents. Human rights campaigners have dubbed it a ‘welfare surveillance state’.

A number of civil society organizations in the Netherlands and two citizens instigated the legal action against SyRI — seeking to block its use. The court has today ordered an immediate halt to the use of the system.

The ruling is being hailed as a landmark judgement by human rights campaigners, with the court basing its reasoning on European human rights law — specifically the right to a private life that’s set out by Article 8 of the European Convention on Human Rights (ECHR) — rather than a dedicated provision in the EU’s data protection framework (GDPR) which relates to automated processing.

GDPR’s Article 22 includes the right for individuals not to be subject to solely automated individual decision-making where those decisions produce significant legal effects. But there can be some fuzziness around whether this applies if there’s a human somewhere in the loop, such as to review a decision on objection.

In this instance the court has sidestepped such questions by finding SyRI directly interferes with rights set out in the ECHR.

Specifically, the court found that the SyRI legislation fails the balancing test in Article 8 of the ECHR, which requires that any social interest be weighed against the violation of individuals’ private life, with a fair and reasonable balance being required.

In its current form the automated risk assessment system failed this test, in the court’s view.

Legal experts suggest the decision sets some clear limits on how the public sector in the UK can make use of AI tools — with the court objecting in particular to the lack of transparency about how the algorithmic risk scoring system functioned.

In a press release about the judgement (translated to English using Google Translate) the court writes that the use of SyRI is “insufficiently clear and controllable”. Per Human Rights Watch, the Dutch government refused during the hearing to disclose “meaningful information” about how SyRI uses personal data to draw inferences about possible fraud.

The court clearly took a dim view of the state trying to circumvent scrutiny of human rights risk by pointing to an algorithmic ‘blackbox’ and shrugging.

The UN special rapporteur on extreme poverty and human rights, Philip Alston — who intervened in the case by providing the court with a human rights analysis — welcomed the judgement, describing it as “a clear victory for all those who are justifiably concerned about the serious threats digital welfare systems pose for human rights”.

“This decision sets a strong legal precedent for other courts to follow. This is one of the first times a court anywhere has stopped the use of digital technologies and abundant digital information by welfare authorities on human rights grounds,” he added in a press statement.

Back in 2018 Alston warned that the UK government’s rush to apply digital technologies and data tools to socially re-engineer the delivery of public services at scale risked having an immense impact on the human rights of the most vulnerable.

So the decision by the Dutch court could have some near-term implications for UK policy in this area.

The judgement does not shut the door entirely on states’ use of automated profiling systems, but it does make clear that in Europe human rights law must be central to the design and implementation of rights-risking tools.

It also comes at a key time when EU policymakers are working on a framework to regulate artificial intelligence — with the Commission pledging to devise rules that ensure AI technologies are applied ethically and in a human-centric way.

It remains to be seen whether the Commission will push for pan-EU limits on specific public sector uses of AI, such as for social security assessments. A recent leaked draft of a white paper on AI regulation suggests it’s leaning towards risk assessments and a patchwork of risk-based rules.