U.S. government reportedly in talks with tech companies on how to use location data in COVID-19 fight

U.S. government officials are currently in discussion with a number of tech companies, including Facebook and Google, about how data from cell phones might help combat the ongoing coronavirus pandemic, according to a new Washington Post report. The talks also include health experts tracking the pandemic and its transmission, and one possible way the data could be useful, per the report’s sources, is as aggregated, anonymized location data.

Location data taken from the smartphones of Americans could help public health experts track and map the general spread of the infection, the group has theorized. Of course, the prospect of any kind of location tracking is bound to leave people uncomfortable, especially when it’s done at scale and involves not only private companies with which they have a business relationship, but also the government.

These efforts, however, would be strictly aimed at helping organizations like the Centers for Disease Control and Prevention (CDC) get an overview of patterns, decoupled from any individual user identity. The Post’s sources stress that this would not involve the creation of any kind of government database, and would instead focus on anonymized, aggregated data to inform modeling of COVID-19’s transmission and spread.

Already, we’ve seen unprecedented collaboration among some of the largest tech companies in the world on matters related to the coronavirus pandemic. On Monday, virtually every large tech company that operates a product involved in information dissemination issued a joint statement about working closely together to fight the spread of fraud and disinformation about the virus.

The White House has also been consulting with tech companies around the virus and the U.S. response, including via a meeting last week that included Amazon, Apple, Facebook, Google, Microsoft and Twitter. Amazon CEO Jeff Bezos has been in regular contact with the current administration, as his company plays an increasingly central and important role in how people are coping with near-global guidelines around isolation, social distancing, quarantine and even shelter-in-place orders.

Earlier this week, an open letter co-signed by a lengthy list of epidemiologists, executives, physicians and academics also sought to outline what tech companies could contribute to the ongoing effort to stem the COVID-19 pandemic. One of the measures suggested (directed at mobile OS providers Apple and Google specifically) is an “opt-in, privacy preserving OS feature to support contact tracing” for individuals who might have been exposed to someone with the virus.

Of course, regardless of assurances to the contrary, it’s natural to be suspicious of any widespread effort to collect personal data, especially since history shows that in times of extreme duress, people have made trade-offs about personal freedoms and protections that have subsequently backfired. The New York Times also reported this week on an initiative to track the location data of people who have contracted the virus using an existing, previously undisclosed database of cellphone data from Israeli cellphone service providers and their customers.

Still, there’s good reason not to instantly dismiss the idea of trying to find some kind of privacy-protecting way of harnessing the information available to tech companies, since it does seem like a way to potentially provide a lot of benefit – particularly when it comes to measuring the impact of social distancing measures currently in place.

Australia sues Facebook over Cambridge Analytica, fine could scale to $529BN

Australia’s privacy watchdog is suing Facebook over the Cambridge Analytica data breach — which, back in 2018, became a global scandal that wiped billions off the tech giant’s share price yet only led to Facebook picking up a $5BN FTC fine.

Should Australia prevail in its suit against the tech giant, the monetary penalty could be exponentially larger.

Australia’s Privacy Act sets out a provision for a civil penalty of up to $1,700,000 to be levied per contravention — and the national watchdog believes there were 311,074 local Facebook users in the cache of ~86M profiles lifted by Cambridge Analytica. So the potential fine here is circa $529BN. (A very far cry from the £500k Facebook paid in the UK over the same data misuse scandal.)
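
As a rough check on that headline figure (assuming, as an upper bound, one maximum penalty per affected user), the arithmetic is simply:

$$311{,}074 \times \$1{,}700{,}000 = \$528{,}825{,}800{,}000 \approx \$529\text{BN}$$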

In a statement published on its website today, the Office of the Australian Information Commissioner (OAIC) says it has lodged proceedings against Facebook in a federal court alleging the company committed serious and/or repeated interferences with privacy.

The suit alleges the personal data of Australian Facebook users was disclosed to the This is Your Digital Life app for a purpose other than that for which it was collected — thereby breaching Australia’s Privacy Act 1988. It further claims the data was exposed to the risk of being disclosed to Cambridge Analytica and used for political profiling purposes, and passed to other third parties.

This is Your Digital Life was an app built by a developer called GSR, which Cambridge Analytica hired to obtain and process Facebook users’ data for political ad-targeting purposes.

The events from which the suit stems took place on Facebook’s platform between March 2014 and May 2015, when user data was being siphoned off by GSR under contract with Cambridge Analytica — which worked with US political campaigns, including Ted Cruz’s presidential campaign and, later, that of now-president Donald Trump.

GSR was co-founded by two psychology researchers, Aleksandr Kogan and Joseph Chancellor. And in a still unexplained twist in the saga, Facebook hired Chancellor around November 2015, soon after some of its own staffers had warned internally about the “sketchy” business Cambridge Analytica was conducting on its ad platform. Chancellor has never spoken to the press and subsequently departed Facebook as quietly as he arrived.

In a concise statement summing up its legal action against Facebook, the OAIC writes:

Facebook disclosed personal information of the Affected Australian Individuals. Most of those individuals did not install the “This is Your Digital Life” App; their Facebook friends did. Unless those individuals undertook a complex process of modifying their settings on Facebook, their personal information was disclosed by Facebook to the “This is Your Digital Life” App by default. Facebook did not adequately inform the Affected Australian Individuals of the manner in which their personal information would be disclosed, or that it could be disclosed to an app installed by a friend, but not installed by that individual.

Facebook failed to take reasonable steps to protect those individuals’ personal information from unauthorised disclosure. Facebook did not know the precise nature or extent of the personal information it disclosed to the “This is Your Digital Life” App. Nor did it prevent the app from disclosing to third parties the personal information obtained. The full extent of the information disclosed, and to whom it was disclosed, accordingly cannot be known. What is known, is that Facebook disclosed the Affected Australian Individuals’ personal information to the “This is Your Digital Life” App, whose developers sold personal information obtained using the app to the political consulting firm Cambridge Analytica, in breach of Facebook’s policies.

As a result, the Affected Australian Individuals’ personal information was exposed to the risk of disclosure, monetisation and use for political profiling purposes.

Commenting in a statement, Australia’s information commissioner and privacy commissioner, Angelene Falk, added: “All entities operating in Australia must be transparent and accountable in the way they handle personal information, in accordance with their obligations under Australian privacy law. We consider the design of the Facebook platform meant that users were unable to exercise reasonable choice and control about how their personal information was disclosed.

“Facebook’s default settings facilitated the disclosure of personal information, including sensitive information, at the expense of privacy. We claim these actions left the personal data of around 311,127 Australian Facebook users exposed to be sold and used for purposes including political profiling, well outside users’ expectations.”

Reached for comment, a Facebook spokesperson sent this statement:

We’ve actively engaged with the OAIC over the past two years as part of their investigation. We’ve made major changes to our platforms, in consultation with international regulators, to restrict the information available to app developers, implement new governance protocols and build industry-leading controls to help people protect and manage their data. We’re unable to comment further as this is now before the Federal Court.

CCPA won’t be enough to fix tech’s data entitlement problem

When the California Consumer Privacy Act (CCPA) rolled out on January 1st, many companies were still scrambling to comply with the data privacy regulation, an effort estimated to cost businesses $55 billion in total. But even checking all of the compliance boxes isn’t enough to safeguard consumer data. The past few years of rampant breaches and data misuse have shown how quickly personal details can fall into the wrong hands. They’ve also shown how often simple user error, enabled by poor data practices, leads to big consequences.

The way to solve this issue isn’t solely through legislation — it’s companies taking a hard look at their behavior and processes. Laws like CCPA and GDPR help set the groundwork for change, but they don’t address the broader issue: businesses feel entitled to people’s data even when it’s not part of their core product offering and have encoded that entitlement into their processes.

Legislated and top-down calls for accountability won’t fix the problem on their own. To protect consumers, companies need to architect internal systems around data custodianship rather than data ownership. Doing so will establish processes that not only hit compliance benchmarks but make responsible data handling the default action.

Privacy compliance over true procedural change is a cop-out

The prevailing philosophy in Silicon Valley is one of data ownership, which shapes how consumers’ personal information is used. The consequences have been widely reported, from the revelations surrounding Cambridge Analytica to Uber’s 57-million-user data breach. Tech companies are losing the trust of customers, partners and governments around the world. In fact, Americans’ perception of tech companies has steadily declined since 2015. More must be done to win that trust back.

Companies that rely on regulations like CCPA and GDPR to guide their data policies essentially ask someone else to draw the line for them, so they can come as close to it as possible — which leads to a “check-the-box” approach to compliance rather than a core philosophy that prioritizes the privacy expectations of their customers. If tech and security leaders build data policies with privacy in mind, we won’t have to spend valuable resources meeting government regulations.

How to take the entitlement out of data handling

Responsible, secure data handling is achievable for every company. The most important step is for businesses to go beyond the bare minimum when reevaluating their data access processes. What’s been most helpful for the companies I’ve worked with is organizing these practices around a simple idea: You can’t lose what you don’t have.

In practice, this idea is known as the Principle of Least Privilege, whereby companies give employees only the data access they need to do their jobs effectively. Here’s an example that applies to most customer-facing businesses out there: Say I’m a customer service rep and a person calls me about a problem with their account. If I operate according to the Principle of Least Privilege, the following data access rules would apply:

  1. I would only have access to that specific customer’s account information;
  2. I would only have access to the specific part of their account where the problem is happening;
  3. I would only have access until the problem is solved.
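
As a rough sketch of how those three rules might be enforced in practice (all names here are hypothetical, not drawn from any particular product), the data-access layer could run a single policy check before returning anything to a rep:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional


@dataclass
class SupportCase:
    case_id: str
    customer_id: str                        # the one customer being helped
    affected_section: str                   # e.g. "billing" or "shipping_address"
    resolved_at: Optional[datetime] = None  # set once the problem is solved


@dataclass
class AccessRequest:
    rep_id: str
    customer_id: str
    section: str


def is_access_allowed(request: AccessRequest, case: SupportCase) -> bool:
    """Apply the three least-privilege rules described above."""
    # Rule 1: only that specific customer's account information.
    if request.customer_id != case.customer_id:
        return False
    # Rule 2: only the part of the account where the problem is happening.
    if request.section != case.affected_section:
        return False
    # Rule 3: only until the problem is solved.
    if case.resolved_at is not None:
        return False
    return True
```

The point of the sketch is simply that a request failing any one of the rules never reaches the underlying data at all.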

Sounds intuitive, right? Yet, many companies — particularly those operating without the Principle of Least Privilege in place — discovered through the GDPR and CCPA compliance process that their data access controls did not work this way. This is how major breaches happen. An employee downloads an entire database — much more data than they need to perform a specific task — their laptop is compromised, and suddenly hackers can access the entire database.

POLP works because it introduces a bit of friction into the data-request process. The goal here is to make the right decision easy and the wrong decision harder, so everyone is intentional about their data use. How a company achieves this will differ based on their business model and growth stage. One option is to have only a single database with an added layer of infrastructure that grants data access through POLP rules.

Alternatively, companies can work these rules into their CRM software. In the example I mentioned, the system would grant data access to a rep only when it recognizes a corresponding customer support case. If an employee tries to access data that is not directly tied to a customer problem, they would encounter an additional login step like two-factor authentication.
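
Continuing the hypothetical sketch from above (same made-up types, nothing vendor-specific), that CRM-driven flow might look roughly like this: grant access when an open case ties the rep to the customer, and otherwise fall back to a step-up check such as a second authentication factor.

```python
def resolve_access(request: AccessRequest,
                   open_cases: list,
                   second_factor_passed: bool) -> str:
    """Decide what happens to a data request under case-based least privilege."""
    for case in open_cases:
        if is_access_allowed(request, case):
            return "granted"                  # tied to a live support case
    if not second_factor_passed:
        return "challenge_second_factor"      # no matching case: add friction first
    return "granted_with_audit"               # allowed, but logged for review
```

The friction is deliberate: the common path (a rep with a real case) stays easy, while anything outside it costs an extra step and leaves a trace.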

There’s no one-size-fits-all approach; rather, data access should operate on a spectrum. For one business, it may mean limiting data access to a single business account and the related set of customer information. At another company, an engineer may need access to multiple customers’ information to fix a product issue. When this happens, the data access should be both time-bound and highly visible, so that other employees can see how the data is used. There may also be times when an employee needs to access data in the aggregate to do their job — for example, to run a report. In this case, the data should always be anonymized.
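
Those broader situations can be sketched the same way (again, purely illustrative names): a multi-customer grant carries an explicit expiry and is written to a log that colleagues can see, and report queries only ever return aggregates rather than rows tied to an individual.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone
from typing import List

AUDIT_LOG: List[str] = []   # in practice, an append-only store visible to the team


@dataclass
class TimeBoundGrant:
    engineer_id: str
    customer_ids: List[str]
    reason: str
    expires_at: datetime

    def is_active(self) -> bool:
        return datetime.now(timezone.utc) < self.expires_at


def grant_temporary_access(engineer_id: str, customer_ids: List[str],
                           reason: str, hours: int = 4) -> TimeBoundGrant:
    """Time-bound, highly visible access to multiple customers' data."""
    grant = TimeBoundGrant(engineer_id, customer_ids, reason,
                           datetime.now(timezone.utc) + timedelta(hours=hours))
    # Visibility: every grant is recorded where other employees can review it.
    AUDIT_LOG.append(f"{engineer_id} granted access to {len(customer_ids)} "
                     f"accounts until {grant.expires_at.isoformat()}: {reason}")
    return grant


def aggregate_for_report(records: List[dict]) -> dict:
    # Reporting works on anonymized aggregates, never on identifiable rows.
    return {"affected_accounts": len(records)}
```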

Protecting consumer data is a moral obligation, not just a legal one

The power of privacy-focused data processes and a system like the Principle of Least Privilege is that, by design, they guide employees to use data with the customer’s best interest in mind. The Golden Rule should apply: We each must treat consumer data in the way we’d want our own data used. With the right functional procedures in place, infrastructure can make responsible data access intuitive.

Companies are not entitled to data; they are entrusted with it. Consumers must be aware of how their data is treated and hold companies accountable. Regulations like CCPA make this easier, but businesses must uphold their end of the bargain.

Trust, not data, is the most valuable currency for businesses today. But current data practices do nothing to earn that trust and we can’t count on regulation alone to change that. Only practices built with privacy and transparency in mind can bring back customer trust and keep personal data protected.