
UK Council websites are letting citizens be profiled for ads, study shows

On the same day that a data ethics advisor to the UK government urged action to regulate online targeting, a study conducted by pro-privacy browser Brave has highlighted how Brits are being profiled by the behavioral ad industry when they visit their local Council’s website — perhaps seeking info on local services or guidance about benefits, including potentially sensitive information related to addiction services or disabilities.

Brave found that nearly all UK Councils permit at least one company to learn about the behavior of people visiting their sites, with a full 409 Councils exposing some visitor data to private companies.

Many large Councils (those serving 300,000+ people) were found exposing site visitors to what Brave describes as “extensive tracking and data collection by private companies” — with the worst offenders, London’s Enfield and Sheffield City Councils, exposing visitors to 25 data collectors apiece.

Brave argues the findings represent a conservative illustration of how much commercial tracking and profiling of visitors is taking place on public sector websites — a floor, rather than a ceiling — given that it studied only the landing pages of Council sites, without any user interaction, and could only pick up known trackers (nor could the study examine how data is passed between tracking and data-brokering companies).

Nor is it the first such study to warn that public sector websites are infested with for-profit adtech. A report last year by Cookiebot found users of public sector and government websites in the EU being tracked when they performed health-related searches — including queries related to HIV, mental health, pregnancy, alcoholism and cancer.

Brave’s study — which was carried out using the webxray tool — found that almost all (98%) of the Councils used Google systems, with the report noting that the tech giant owns all five of the top embedded elements loaded by Council websites, which it suggests gives the company a god-like view of how UK citizens are interacting with their local authorities online.
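For a sense of what such a scan involves, below is a minimal Python sketch of the general technique: fetch a landing page and list the third-party hosts its HTML pulls in, flagging any that appear on a known-tracker list. This is not Brave’s webxray code; the URL and the tracker list are illustrative placeholders, and a static parse like this misses dynamically injected trackers, which is one reason such counts are a floor.

```python
# A minimal sketch, NOT Brave's webxray methodology: enumerate the
# third-party hosts referenced by a page's HTML. The COUNCIL_URL and
# KNOWN_TRACKERS values are illustrative placeholders only.
from urllib.parse import urlparse

import requests
from bs4 import BeautifulSoup

COUNCIL_URL = "https://www.example-council.gov.uk/"  # hypothetical
KNOWN_TRACKERS = {"google-analytics.com", "doubleclick.net"}  # tiny sample list

html = requests.get(COUNCIL_URL, timeout=10).text
soup = BeautifulSoup(html, "html.parser")
first_party = urlparse(COUNCIL_URL).hostname

third_party_hosts = set()
for tag, attr in (("script", "src"), ("img", "src"), ("iframe", "src"), ("link", "href")):
    for element in soup.find_all(tag):
        url = element.get(attr)
        if not url:
            continue
        host = urlparse(url).hostname
        # Relative URLs have no hostname and are treated as first-party.
        if host and host != first_party:
            third_party_hosts.add(host)

for host in sorted(third_party_hosts):
    flag = "KNOWN TRACKER" if any(host.endswith(t) for t in KNOWN_TRACKERS) else "unrecognised"
    print(f"{host}: {flag}")
```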

The analysis also found 198 of the Council websites use the real-time bidding (RTB) form of programmatic online advertising. This is notable because RTB is the subject of a number of data protection complaints across the European Union — including in the UK, where the Information Commissioner’s Office (ICO) itself has been warning the adtech industry for more than half a year that its current processes are in breach of data protection laws.

However, the UK watchdog has preferred to bark softly in the industry’s general direction over its RTB problem, rather than taking any enforcement action — a response that’s been dubbed “disastrous” by privacy campaigners.

One of the smaller RTB players the report highlights — which calls itself the Council Advertising Network (CAN) — was found sharing people’s data from 34 Council websites with 22 companies, which could then be insecurely broadcasting it on to hundreds or more entities in the bid chain.
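To make concrete what “broadcasting” means here: in an RTB auction, an exchange sends a bid request describing the user and the page they are on to many potential bidders at once. Below is a heavily simplified, hypothetical payload loosely modeled on the OpenRTB format; every field value is invented for illustration, but note how the page URL alone can reveal sensitive context about the visitor.

```python
# A heavily simplified, hypothetical bid request loosely modeled on the
# OpenRTB format. Field values are invented; this is not captured CAN
# traffic. The page URL alone can reveal sensitive context.
import json

bid_request = {
    "id": "abc123",  # auction ID
    "site": {
        "domain": "example-council.gov.uk",  # hypothetical
        "page": "https://example-council.gov.uk/benefits/disability-support",
    },
    "device": {
        "ua": "Mozilla/5.0 ...",       # browser user agent
        "ip": "203.0.113.42",          # coarse location signal
        "geo": {"country": "GBR"},
    },
    "user": {
        "id": "cookie-synced-user-id",  # pseudonymous ID linkable across sites
    },
}

# In a real auction this JSON is sent to every connected bidder, each of
# which may log it regardless of whether it wins the impression.
print(json.dumps(bid_request, indent=2))
```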

Slides from a CAN media pack refer to “budget conscious” direct marketing opportunities via the ability to target visitors to Council websites accessing pages about benefits, child care and free local activities; “disability” marketing opportunities via pages such as home care, blue badges and community and social services; and “key life stages” marketing opportunities via pages related to moving home, having a baby, getting married or losing a loved one.

Brave’s report — while clearly also a promotion for its own anti-tracking browser (Brave being a commercial player too) — should be seen in the context of the ICO’s ongoing failure to take enforcement action against RTB abuses. It’s an attempt to increase pressure on the regulator to act by further illuminating a complex industry that has used a lack of transparency to shield massive rights abuses, and that continues to benefit from a lack of enforcement of Europe’s General Data Protection Regulation.

And a low level of public understanding of how all the pieces in the adtech chain fit together — summing to a dysfunctional whole in which public services are turned against the citizens whose taxes fund them, tracking and targeting people for exploitative ads — likely helps discourage sharper regulatory action.

But, as the saying goes, sunlight disinfects.

Asked what steps he would like the regulator to take, Brave’s chief policy officer, Dr Johnny Ryan, told TechCrunch: “I want the ICO to use its powers of enforcement to end the UK’s largest data breach. That data breach continues, and two years to the day after I first blew the whistle about RTB, Simon McDougall wrote a blog post accepting Google and the IAB’s empty gestures as acts of substance. It is time for the ICO to move this over to its enforcement team, and stop wasting time.”

We’ve reached out to the ICO for a response to the report’s findings.

UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy of children online.

The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.

UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.

The ICO’s code comprises 15 standards of what it calls “age appropriate design” — which the regulator says reflect a “risk-based approach” — including stipulations that settings should default to ‘high privacy’; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.

Profiling should also be off by default. The code additionally takes aim at dark pattern UI designs that seek to manipulate user actions against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.

“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.
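What might ‘high privacy by default’ look like in practice? Here is a minimal Python sketch, using hypothetical setting names that are not drawn from the ICO code, of defaults that mirror standards 7, 9, 10 and 12 listed further below: profiling, geolocation and data sharing all start disabled, and location visibility resets at the end of each session.

```python
# A minimal sketch (hypothetical names, not from the ICO code) of what
# 'high privacy by default' settings for a child user might look like.
from dataclasses import dataclass

@dataclass
class ChildAccountSettings:
    # Standard 7: settings are 'high privacy' by default
    profile_visibility: str = "private"
    # Standard 12: profiling off unless there is a compelling reason
    personalised_content: bool = False
    # Standard 10: geolocation off by default
    geolocation_enabled: bool = False
    location_visible_to_others: bool = False
    # Standard 9: no data sharing without a compelling reason
    share_data_with_partners: bool = False

    def end_session(self) -> None:
        # Standard 10: options making a child's location visible to
        # others must default back to 'off' at the end of each session.
        self.location_visible_to_others = False

settings = ChildAccountSettings()
assert not settings.personalised_content  # profiling starts disabled
```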

While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services — with the regulator noting that “the majority of online services that children use are covered” and also stipulating that “this code applies if children are likely to use your service” [emphasis ours].

This means it could be applied to anything from games and social media platforms to fitness apps, educational websites and on-demand streaming services — if they’re available to UK users.

“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.

Here are the 15 standards in full as the regulator describes them:

  1. Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
  2. Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
  3. Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
  4. Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
  5. Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
  6. Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
  7. Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
  8. Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
  9. Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
  10. Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
  11. Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
  12. Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
  13. Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
  14. Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
  15. Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.

The Age Appropriate Design Code also defines children as under the age of 18 — a higher bar than current UK data protection law, which, for example, sets the age at which children can legally consent to being tracked online at just 13.

So — assuming (very wildly) that Internet services were suddenly to decide to follow the code to the letter, setting trackers off by default and not nudging users to weaken privacy-protecting defaults by manipulating them into giving up more data — the code could, in theory, raise the level of privacy both children and adults typically get online.

However it’s not legally binding — so there’s a pretty fat chance of that.

Although the regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it regulates and can legally enforce — pointing out that it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.
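For scale, a back-of-the-envelope sketch of that sanctions ceiling (the turnover figure is invented; under the GDPR the higher tier of fines is capped at the greater of €20 million or 4% of worldwide annual turnover):

```python
# Illustrative only: the GDPR's higher tier of fines is capped at the
# greater of EUR 20 million or 4% of worldwide annual turnover.
def max_gdpr_fine(global_turnover_eur: float) -> float:
    return max(20_000_000.0, 0.04 * global_turnover_eur)

# A hypothetical company turning over EUR 10 billion a year:
print(f"EUR {max_gdpr_fine(10_000_000_000):,.0f}")  # EUR 400,000,000
```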

So, in a way, the regulator appears to be saying: ‘Are you feeling lucky data punk?’

Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.

The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.

“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.

“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”

“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.

“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”

Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.

“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.

Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.

But at the same time it dropped a controversial plan, included in a 2017 piece of digital legislation, which would have made age checks for accessing online pornography mandatory — saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.

How comprehensive the touted ‘child protections’ will end up being remains to be seen.

Brown suggested age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.

It has also been consulting with tech companies on possible ways to implement age verification online.

The difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are mired in geopolitics.)

While the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress