Cookie consent still a compliance trash-fire in latest watchdog peek

The latest confirmation of the online tracking industry’s continued flouting of EU privacy laws which — at least on paper — are supposed to protect citizens from consent-less digital surveillance comes via Ireland’s Data Protection Commission (DPC).

The watchdog did a sweep survey of around 40 popular websites last year — covering sectors including media and publishing; retail; restaurants and food ordering services; insurance; sport and leisure; and the public sector — and in a new report, published yesterday, it found almost all failing on a number of cookie and tracking compliance issues, with breaches ranging from minor to serious.

Twenty were graded ‘amber’ by the regulator, which signals a good response and approach to compliance but with at least one serious concern identified; twelve were graded ‘red’, based on very poor quality responses and a plethora of bad practices around cookie banners, setting multiple cookies without consent, badly designed cookie policies or privacy policies, and a lack of clarity about whether they understood the purposes of the ePrivacy legislation; while a further three got a borderline ‘amber to red’ grade.

Just two of the 38 controllers got a ‘green’ rating (substantially compliant, with any concerns straightforward and easily remedied); and one more got a borderline ‘green to amber’ grade.

EU law means that if a data controller is relying on consent as the legal basis for tracking a user the consent must be specific, informed and freely given. Additional court rulings last year have further finessed guidance around online tracking — clarifying pre-checked consent boxes aren’t valid, for example.

Yet the DPC still found examples of cookie banners that offer no actual choice at all. Such as those which serve a dummy banner with a cookie notice that users can only meaninglessly click ‘Got it!’ to dismiss. (‘Gotcha data’ more like…)

In fact the watchdog writes that it found ‘implied’ consent being relied upon by around two-thirds of the controllers, based on the wording of their cookie banners (e.g. notices such as: “by continuing to browse this site you consent to the use of cookies”) — despite this no longer meeting the required legal standard.

“Some appeared to be drawing on older, but no longer extant, guidance published by the DPC that indicated consent could be obtained ‘by implication’, where such informational notices were put in place,” it writes, noting that current guidance on its website “does not make any reference to implied consent, but it also focuses more on user controls for cookies rather than on controller obligations”.

Another finding was that all but one website set cookies immediately on landing — with “many” of these found to have no legal justification for being set before consent is sought, as the DPC determined they fall outside the available consent exemptions in the relevant regulations.
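A first-pass version of that check is simple to reproduce. The sketch below (not the DPC’s actual methodology; the URL is a placeholder) fetches a page without any user interaction and lists whatever cookies arrive, though it only catches cookies set via HTTP headers, so cookies set later by JavaScript would need a headless browser to observe.

```python
# A minimal sketch, not the DPC's methodology: fetch a page with zero user
# interaction and list any cookies the server sets on landing. This only
# observes HTTP Set-Cookie headers; JavaScript-set cookies would need a
# headless browser to catch.
import requests

def cookies_set_on_landing(url: str) -> list[str]:
    session = requests.Session()
    session.get(url, timeout=10)
    # Anything in the jar now was set before the user consented to anything.
    return [f"{cookie.name} (domain={cookie.domain})" for cookie in session.cookies]

if __name__ == "__main__":
    for line in cookies_set_on_landing("https://example.com"):  # placeholder URL
        print(line)
```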

It also identified widespread abuse of the concept of ‘strictly necessary’ where the use of trackers is concerned. “Many controllers categorised the cookies deployed on their websites as having a ‘necessary’ or ‘strictly necessary’ function, where the stated function of the cookie appeared to meet neither of the two consent exemption criteria set down in the ePrivacy Regulations/ePrivacy Directive,” it writes in the report. “These included cookies used to establish chatbot sessions that were set prior to any request by the user to initiate a chatbot function. In some cases, it was noted that the chatbot function on the websites concerned did not work at all.

“It was clear that some controllers may either misunderstand the ‘strictly necessary’ criteria, or that their definitions of what is strictly necessary are rather more expansive than the definitions provided in Regulation 5(5),” it adds.

Another problem the report highlights is a lack of tools for users to vary or withdraw their consent choices, despite some of the reviewed sites using so-called ‘consent management platforms’ (CMPs) sold by third-party vendors.

This chimes with a recent independent study of CMPs — which earlier this year found illegal practices to be widespread, with “dark patterns and implied consent… ubiquitous”, as the researchers put it.

“Badly designed — or potentially even deliberately deceptive — cookie banners and consent-management tools were also a feature on some sites,” the DPC writes in its report, detailing examples such as Quantcast’s CMP having been implemented in such a way as to make the interface “confusing and potentially deceptive” (such as unlabelled toggles and a ‘reject all’ button that had no effect).

Pre-checked boxes/sliders were also found to be common, with the DPC finding ten of the 38 controllers used them — despite ‘consent’ collected like that not actually being valid consent.

“In the case of most of the controllers, consent was also ‘bundled’ — in other words, it was not possible for users to control consent to the different purposes for which cookies were being used,” the DPC also writes. “This is not permitted, as has been clarified in the Planet49 judgment. Consent does not need to be given for each cookie, but rather for each purpose. Where a cookie has more than one purpose requiring consent, it must be obtained for all of those purposes separately.”

In another finding, the regulator came across instances of websites that had embedded tracking technologies, such as Facebook pixels, yet whose operators did not list these in responses to the survey, listing only HTTP browser cookies instead. The DPC suggests this indicates some controllers aren’t even aware of trackers baked into their own sites.

“It was not clear, therefore, whether some controllers were aware of some of the tracking elements deployed on their websites — this was particularly the case where small controllers had outsourced their website management and development to a third party,” it writes.
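The gap between what a site loads and what its operator thinks it loads is also straightforward to audit for. A hypothetical first-pass scan might look like the sketch below; the host list is illustrative, and a real audit would lean on a maintained blocklist such as EasyPrivacy.

```python
# A hypothetical first-pass audit for trackers baked into a page. The host
# list is illustrative and far from exhaustive; real audits use maintained
# blocklists (e.g. EasyPrivacy).
import requests

KNOWN_TRACKER_HOSTS = (
    "connect.facebook.net",      # loader for the Facebook pixel
    "www.facebook.com/tr",       # the pixel endpoint itself
    "www.google-analytics.com",
    "www.googletagmanager.com",
)

def find_trackers(url: str) -> list[str]:
    html = requests.get(url, timeout=10).text
    return [host for host in KNOWN_TRACKER_HOSTS if host in html]

# Example: anything this prints is a tracker the operator should know about.
print(find_trackers("https://example.com"))  # placeholder URL
```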

The worst sector of its targeted sweep — in terms of “poor practices and, in particular, poor understanding of the ePrivacy Regulations and their purpose” — was the restaurants and food-ordering sector, per the report. (Though the finding is clearly based on a small sampling across multiple sectors.)

Despite encountering near blanket failure to actually comply with the law, the DPC, which also happens to be the lead regulator for much of big tech in Europe, has responded by issuing, er, further guidance.

This includes specifics such as: pre-checked consent boxes must be removed; cookie banners can’t be designed to ‘nudge’ users to accept, and a reject option must have equal prominence; and no non-necessary cookies may be set on landing. It also stipulates there must always be a way for users to withdraw consent — and doing so should be as easy as consenting.
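Abstract as that guidance sounds, it pins down concrete behaviour. Here’s a minimal sketch, under assumed purpose names, of a consent record that follows it: nothing pre-checked, consent recorded per purpose as the Planet49 judgment requires, and withdrawal exactly as easy as consenting.

```python
# A minimal sketch of a consent record shaped by the guidance above.
# Purpose names are illustrative; nothing is pre-checked, consent is
# recorded per purpose (per Planet49), and withdrawal mirrors consent.
from dataclasses import dataclass, field

PURPOSES = ("analytics", "advertising", "personalisation")

@dataclass
class ConsentRecord:
    # No pre-checked boxes: every purpose starts as False until the user acts.
    choices: dict[str, bool] = field(
        default_factory=lambda: {p: False for p in PURPOSES}
    )

    def consent(self, purpose: str) -> None:
        self.choices[purpose] = True

    def withdraw(self, purpose: str) -> None:
        # Withdrawing must be as easy as consenting: one call, same shape.
        self.choices[purpose] = False

    def allows(self, purpose: str) -> bool:
        return self.choices.get(purpose, False)
```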

All stuff that’s been clear, and increasingly so, at least since the GDPR came into application in May 2018. Nonetheless, the regulator is giving the website operators in question a further six months’ grace to get their houses in order — after which it has raised the prospect of actually enforcing the EU’s ePrivacy Directive and the General Data Protection Regulation.

“Where controllers fail to voluntarily make changes to their user interfaces and/or their processing, the DPC has enforcement options available under both the ePrivacy Regulations and the GDPR and will, where necessary, examine the most appropriate enforcement options in order to bring controllers into compliance with the law,” it warns.

The report is just the latest shot across the bows of the online tracking industry in Europe.

The UK’s Information Commissioner’s Office (ICO) has been issuing sternly worded blog posts for months. Its own report last summer found illegal profiling of Internet users by the programmatic ad industry to be rampant — also giving the industry six months to reform.

However the ICO still hasn’t done anything about the adtech industry’s legal black hole — leading privacy experts to denounce the lack of any “substantive action to end the largest data breach ever recorded in the UK”, as one put it at the start of this year.

Ireland’s DPC, meanwhile, has yet to pull the trigger on decisions in multiple cross-border investigations into the data-mining business practices of tech giants including Facebook and Google, following scores of GDPR complaints — including several targeting their legal base to process people’s data.

A two-year review of the pan-EU regulation, set for May 2020, provides one hard deadline that might concentrate minds.

Google’s new T&Cs include a Brexit ‘easter egg’ for UK users

Google has buried a major change in legal jurisdiction for its UK users as part of a wider update to its terms and conditions that’s been announced today and which it says is intended to make its conditions of use clearer for all users.

It says the update to its T&Cs is the first major revision since 2012 — with Google saying it wanted to ensure the policy reflects its current products and applicable laws.

Google says it undertook a major review of the terms, similar to the revision of its privacy policy in 2018, when the EU’s General Data Protection Regulation started being applied. But while it claims the new T&Cs are easier for users to understand — rewritten using simpler language and a clearer structure — there are no other changes involved, such as to how it handles people’s data.

“We’ve updated our Terms of Service to make them easier for people around the world to read and understand — with clearer language, improved organization, and greater transparency about changes we make to our services and products. We’re not changing the way our products work, or how we collect or process data,” Google spokesperson Shannon Newberry said in a statement.

Users of Google products are being asked to review and accept the new terms before March 31 when they are due to take effect.

Reuters reported on the move late yesterday — citing sources familiar with the update who suggested the change of jurisdiction for UK users will weaken legal protections around their data.

However Google disputes there will be any change in privacy standards for UK users as a result of the shift. It told us there will be no change to how it processes UK users’ data; no change to their privacy settings; and no change to the way it treats their information as a result of the move.

We asked the company for further comment on this — including why it chose not to make a UK subsidiary the legal base for UK users — and a spokesperson told us it is making the change as part of its preparations for the UK to leave the European Union (aka Brexit).

“Like many companies, we have to prepare for Brexit,” Google said. “Nothing about our services or our approach to privacy will change, including how we collect or process data, and how we respond to law enforcement demands for users’ information. The protections of the UK GDPR will still apply to these users.”

Heather Burns, a tech policy specialist based in Glasgow, Scotland — who runs a website dedicated to tracking UK policy shifts around the Brexit process — also believes Google has essentially been forced to make the move because the UK government has recently signalled its intent to diverge from European Union standards in future, including on data protection.

“What has changed since January 31 has been [UK prime minister] Boris Johnson making a unilateral statement that the UK will go its own way on data protection, in direct contrast to everything the UK’s data protection regulator and government has said since the referendum,” she told us. “These bombastic, off-the-cuff statements play to his anti-EU base but businesses act on them. They have to.”

“Google’s transfer of UK accounts from the EU to the US is an indication that they do not believe the UK will either seek or receive a data protection adequacy agreement at the end of the transition period. They are choosing to deal with that headache now rather than later. We shouldn’t underestimate how strong a statement this is from the tech sector regarding its confidence in the Johnson premiership,” she added.

Asked whether she believes there will be a reduction in protections for UK users in future as a result of the shift, Burns suggested that will largely depend on Google.

So — in other words — Brexit means, er, trust Google to look after your data.

“The European data protection framework is based around a set of fundamental user rights and controls over the uses of personal data — the everyday data flows to and from all of our accounts. Those fundamental rights have been transposed into UK domestic law through the Data Protection Act 2018, and they will stay, for now. But with the Johnson premiership clearly ready to jettison the European-derived system of user rights for the US-style anything goes model,” Burns suggested.

“Google saying there is no change to the way we process users’ data, no change to their privacy settings and no change to the way we treat their information can be taken as an indication that they stand willing to continue providing UK users with European-style rights over their data — albeit from a different jurisdiction — regardless of any government intention to erode the domestic legal basis for those rights.”

Reuters’ report also raises concerns about the impact of the Cloud Act agreement between the UK and the US — which is due to come into effect this summer — suggesting it will pose a threat to the safety of UK Google users’ data once it’s moved out of an EU jurisdiction (in this case Ireland) to the US where the Act will apply.

The Cloud Act is intended to make it quicker and easier for law enforcement to obtain data stored in the cloud by companies based in the other legal jurisdiction.

So in future, it might be easier for UK authorities to obtain UK Google users’ data using this legal instrument applied to Google US.

It certainly seems clear that as the UK moves away from EU standards as a result of Brexit it is opening up the possibility of the country replacing long-standing data protection rights for citizens with a regime of supercharged mass surveillance. (The UK government has already legislated to give its intelligence agencies unprecedented powers to snoop on ordinary citizens’ digital comms — so it has a proven appetite for bulk data.)

Again, Google told us the shift of legal base for its UK users will make no difference to how it handles law enforcement requests — a process it talks about here — and further claimed this will be true even when the Cloud Act applies. Which is a weasely way of saying it will do exactly what the law requires.

Google confirmed that GDPR will continue to apply for UK users during the transition period between the old and new terms. After that it said UK data protection law will continue to apply — emphasizing that this is modelled after the GDPR. But of course in the post-Brexit future the UK government might choose to model it after something very different.

Asked to confirm whether it’s committing to maintain current data standards for UK users in perpetuity, the company told us it cannot speculate as to what privacy laws the UK will adopt in the future… 😬

We also asked why it hasn’t chosen to elect a UK subsidiary as the legal base for UK users. To which it gave a nonsensical response — saying this is because the UK is no longer in the EU. Which raises the question: when did the UK suddenly become the 51st American state?

Returning to the wider T&Cs revision, Google said it’s making the changes in response to litigation in the European Union targeted at its terms.

This includes a case in Germany where consumer rights groups successfully sued the tech giant over its use of overly broad terms which the court agreed last year were largely illegal.

In another case a year ago in France a court ordered Google to pay €30,000 for unfair terms — and ordered it to obtain valid consent from users for tracking their location and online activity.

Since at least 2016 the European Commission has also been pressuring tech giants, including Google, to fix consumer rights issues buried in their T&Cs — including unfair terms. A variety of EU laws apply in this area.

In another change being bundled with the new T&Cs Google has added a description about how its business works to the About Google page — where it explains its business model and how it makes money.

Here, among the usual ‘dead cat’ claims about not ‘selling your information’ (tl;dr adtech giants rent attention; they don’t need to sell actual surveillance dossiers), Google writes that it doesn’t use “your emails, documents, photos or confidential information (such as race, religion or sexual orientation) to personalize the ads we show you”.

Though it could be using all that personal stuff to help it build new products it can serve ads alongside.

Even further towards the end of its business model screed it includes the claim that “if you don’t want to see personalized ads of any kind, you can deactivate them at any time”. So, yes, buried somewhere in Google’s labyrinthine settings there exists an opt-out.

The change in how Google articulates its business model comes in response to growing political and regulatory scrutiny of adtech business models such as Google’s — including on data protection and antitrust grounds.

California’s new privacy law is off to a rocky start

California’s new privacy law was years in the making.

The California Consumer Privacy Act — or CCPA — took effect on January 1, allowing state residents to reclaim their right to access and control their personal data. Inspired by Europe’s GDPR, the CCPA is the largest statewide privacy law change in a generation. The new law lets users request a copy of the data that tech companies hold on them, delete that data when they no longer want a company to have it, and demand that their data not be sold to third parties. All of this is much to the chagrin of the tech giants, some of which have spent millions to comply with the law and have many more millions set aside to deal with the anticipated influx of consumer data access requests.

But to say things are going well is a stretch.

Many of the tech giants that kicked and screamed in resistance to the new law have acquiesced and accepted their fate — at least until something different comes along. The California tech scene had more than a year to prepare, but some have made it downright difficult and — ironically — more invasive in some cases for users to exercise their rights, largely because every company has a different interpretation of what compliance should look like.

Alex Davis is just one California resident who tried to use his new rights under the law to make a request to delete his data. He vented his annoyance on Twitter, saying companies have responded to CCPA by making requests “as confusing and difficult as possible in new and worse ways.”

“I’ve never seen such deliberate attempts to confuse with design,” he told TechCrunch. He referred to what he described as “dark patterns,” a type of user interface design that tries to trick users into making certain choices, often against their best interests.

“I tried to make a deletion request but it bogged me down with menus that kept redirecting… things to be turned on and off,” he said.

Despite his frustration, Davis got further than others. Just as some companies have made it easy for users to opt out of having their data sold by adding the legally required “Do not sell my info” links on their websites, many have not. Some have made it near-impossible to find these “data portals,” which companies set up so users can request a copy of their data or delete it altogether. For now, California companies are still in a grace period — but only until July, when the CCPA’s enforcement provisions kick in. Until then, users are finding ways around it — by collating and sharing links to data portals to help others access their data.

“We really see a mixed story on the level of CCPA response right now,” said Jay Cline, who heads up consulting giant PwC’s data privacy practice, describing it as a patchwork of compliance.

PwC’s own data found that only 40% of the largest 600 U.S. companies had a data portal. Only a fraction, Cline said, extended their portals to users outside of California, even though other states are gearing up to push similar laws to the CCPA.

But not all data portals are created equal. Given how much data companies store on us — personal or otherwise — the risks of getting things wrong are greater than ever. Tech companies are still struggling to figure out the best way to verify each data request to access or delete a user’s data without inadvertently giving it away to the wrong person.

Last year, security researcher James Pavur impersonated his fiancee and tricked tech companies into turning over vast amounts of data about her, including credit card information, account logins and passwords and, in one case, a criminal background check. Only a few of the companies asked for verification. Two years ago, Akita founder Jean Yang described someone hacking into her Spotify account and requesting her account data as an “unfortunate consequence” of GDPR, which mandated companies operating on the continent allow users access to their data.

(Image: Twitter/@jeanqasaur)

The CCPA says companies should verify a person’s identity to a “reasonable degree of certainty.” For some that’s just an email address to send the data.

Others require sending in even more sensitive information just to prove it’s them.

Indeed, i360, a little-known advertising and data company, until recently asked California residents for a person’s full Social Security number. That has since changed to just the last four digits. Verizon (which owns TechCrunch) wants its customers and users to upload their driver’s license or state ID to verify their identity. Comcast asks for the same, but goes the extra step by asking for a selfie before it will turn over any of a customer’s data.

Comcast asks for the same amount of information to verify a data request as the controversial facial recognition startup, Clearview AI, which recently made headlines for creating a surveillance system made up of billions of images scraped from Facebook, Twitter and YouTube to help law enforcement trace a person’s movements.
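None of that heavyweight document collection is strictly required by the statute’s “reasonable degree of certainty” wording. As a contrast, here’s a hypothetical sketch of the lighter-weight end of the spectrum: an expiring, signed token sent to the email address already on file. The secret key is a placeholder and the actual email-sending step is omitted.

```python
# A hypothetical sketch of lightweight identity verification for a data
# request: an expiring, HMAC-signed token sent to the email address on
# file, rather than an SSN, ID upload or selfie. SECRET_KEY is a
# placeholder; the email-sending step is omitted.
import hashlib
import hmac
import time

SECRET_KEY = b"replace-with-a-real-secret"
TOKEN_TTL_SECONDS = 15 * 60  # token stays valid for 15 minutes

def issue_token(email: str) -> str:
    issued_at = str(int(time.time()))
    sig = hmac.new(SECRET_KEY, f"{email}:{issued_at}".encode(), hashlib.sha256)
    return f"{issued_at}.{sig.hexdigest()}"  # this is what gets emailed

def verify_token(email: str, token: str) -> bool:
    try:
        issued_at_str, their_sig = token.split(".")
        issued_at = int(issued_at_str)
    except ValueError:
        return False
    if time.time() - issued_at > TOKEN_TTL_SECONDS:
        return False  # expired: the requester took too long
    sig = hmac.new(SECRET_KEY, f"{email}:{issued_at}".encode(), hashlib.sha256)
    return hmac.compare_digest(sig.hexdigest(), their_sig)
```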

As much as CCPA has caused difficulties, it has helped forge an entirely new class of compliance startups ready to help large and small companies alike handle the regulatory burdens to which they are subject. Several startups in the space are taking advantage of the $55 billion expected to be spent on CCPA compliance in the next year — like Segment, which gives customers a consolidated view of the data they store; Osano, which helps companies comply with CCPA; and Securiti, which just raised $50 million to help expand its CCPA offering. With CCPA and GDPR under their belts, their services are designed to scale to accommodate new state or federal laws as they come in.

Another startup, Mine, which lets users “take ownership” of their data by acting as a broker to allow users to easily make requests under CCPA and GDPR, had a somewhat bumpy debut.

The service asks users to grant it access to their inbox, scanning for email subject lines that contain company names and using that data to determine which companies a user can request their data from, or ask to delete it. (The service requests access to a user’s Gmail, but the company claims it will “never read” users’ emails.) Last month, during a publicity push, Mine inadvertently copied a couple of emailed data requests to TechCrunch, allowing us to see the names and email addresses of two requesters who wanted Crunch, a popular gym chain with a similar name, to delete their data.

(Screenshot: Zack Whittaker/TechCrunch)

TechCrunch alerted Mine — and the two requesters — to the security lapse.

“This was a mix-up on our part where the engine that finds companies’ data protection offices’ addresses identified the wrong email address,” said Gal Ringel, co-founder and chief executive at Mine. “This issue was not reported during our testing phase and we’ve immediately fixed it.”
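Mine hasn’t published how its matching engine works, but the failure class it describes is easy to reproduce. In the hypothetical sketch below (the contact addresses are invented; this is not Mine’s code), naive substring matching cannot tell the gym from the publisher, while word-boundary matching can:

```python
# A hypothetical sketch of subject-line company matching and how it can
# misfire. The directory addresses are invented; this is not Mine's code.
import re

COMPANY_DIRECTORY = {
    "Crunch": "privacy@crunch.example",          # the gym chain (invented address)
    "TechCrunch": "privacy@techcrunch.example",  # the publisher (invented address)
}

def naive_match(subject: str) -> list[str]:
    # Substring matching: "Crunch" also lives inside "TechCrunch".
    return [name for name in COMPANY_DIRECTORY if name.lower() in subject.lower()]

def safer_match(subject: str) -> list[str]:
    # Word-boundary matching avoids the partial-name collision.
    return [
        name for name in COMPANY_DIRECTORY
        if re.search(rf"\b{re.escape(name)}\b", subject, re.IGNORECASE)
    ]

print(naive_match("Your TechCrunch newsletter"))  # ['Crunch', 'TechCrunch'] - ambiguous
print(safer_match("Your TechCrunch newsletter"))  # ['TechCrunch']
```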

For now, many startups have caught a break.

The smaller, early-stage startups that don’t yet make $25 million in annual revenue or store the personal data of more than 50,000 users or devices will largely escape having to comply with CCPA immediately. But that doesn’t mean startups can be complacent. As early-stage companies grow, so will their legal responsibilities.
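For concreteness, those applicability thresholds reduce to a tiny check. The sketch below paraphrases the statute’s three alternative tests, including a third (deriving at least half of annual revenue from selling personal data) that the paragraph above doesn’t mention; treat it as illustration, not legal advice.

```python
# A back-of-the-envelope sketch of the CCPA applicability thresholds.
# Thresholds paraphrased from the statute; illustration, not legal advice.
def ccpa_applies(annual_revenue_usd: float,
                 consumer_records: int,
                 revenue_share_from_selling_data: float) -> bool:
    return (
        annual_revenue_usd > 25_000_000            # gross annual revenue test
        or consumer_records >= 50_000              # consumers/households/devices test
        or revenue_share_from_selling_data >= 0.5  # half of revenue from selling data
    )

# An early-stage startup under every threshold escapes for now...
print(ccpa_applies(3_000_000, 20_000, 0.0))  # False
# ...but crossing the records threshold as it grows changes the answer.
print(ccpa_applies(3_000_000, 80_000, 0.0))  # True
```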

“For those who did launch these portals and offer rights to all Americans, they are in the best position to be ready for these additional states,” said Cline. “Smaller companies in some ways have an advantage for compliance if their products or services are commodities, because they can build in these controls right from the beginning,” he said.

CCPA may have gotten off to a bumpy start, but time will tell if things get easier. Just this week, California’s attorney general Xavier Becerra released newly updated guidance aimed at trying to “fine tune” the rules, per his spokesperson. It goes to show that even California’s lawmakers are still trying to get the balance right.

But with the looming threat of hefty fines just months away, time is running out for the non-compliant.