UK watchdog sets out “age appropriate” design code for online services to keep kids’ privacy safe

The UK’s data protection watchdog has today published a set of design standards for Internet services which are intended to help protect the privacy of children online.

The Information Commissioner’s Office (ICO) has been working on the Age Appropriate Design Code since the 2018 update of domestic data protection law — as part of a government push to create ‘world-leading’ standards for children when they’re online.

UK lawmakers have grown increasingly concerned about the ‘datafication’ of children when they go online and may be too young to legally consent to being tracked and profiled under existing European data protection law.

The ICO’s code comprises 15 standards of what it calls “age appropriate design”, which the regulator says reflect a “risk-based approach”. These include stipulations that settings should be set to ‘high privacy’ by default; that only the minimum amount of data needed to provide the service should be collected and retained; and that children’s data should not be shared unless there’s a reason to do so that’s in their best interests.

Profiling should also be off by default. The code additionally takes aim at dark pattern UI designs that seek to manipulate users into acting against their own interests, saying “nudge techniques” should not be used to “lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections”.

“The focus is on providing default settings which ensures that children have the best possible access to online services whilst minimising data collection and use, by default,” the regulator writes in an executive summary.

While the age appropriate design code is focused on protecting children, it applies to a very broad range of online services, with the regulator noting that “the majority of online services that children use are covered” and also stipulating “this code applies if children are likely to use your service” [emphasis ours].

This means it could be applied to anything from games and social media platforms to fitness apps, educational websites and on-demand streaming services, if they’re available to UK users.

“We consider that for a service to be ‘likely’ to be accessed [by children], the possibility of this happening needs to be more probable than not. This recognises the intention of Parliament to cover services that children use in reality, but does not extend the definition to cover all services that children could possibly access,” the ICO adds.

Here are the 15 standards in full as the regulator describes them:

  1. Best interests of the child: The best interests of the child should be a primary consideration when you design and develop online services likely to be accessed by a child.
  2. Data protection impact assessments: Undertake a DPIA to assess and mitigate risks to the rights and freedoms of children who are likely to access your service, which arise from your data processing. Take into account differing ages, capacities and development needs and ensure that your DPIA builds in compliance with this code.
  3. Age appropriate application: Take a risk-based approach to recognising the age of individual users and ensure you effectively apply the standards in this code to child users. Either establish age with a level of certainty that is appropriate to the risks to the rights and freedoms of children that arise from your data processing, or apply the standards in this code to all your users instead.
  4. Transparency: The privacy information you provide to users, and other published terms, policies and community standards, must be concise, prominent and in clear language suited to the age of the child. Provide additional specific ‘bite-sized’ explanations about how you use personal data at the point that use is activated.
  5. Detrimental use of data: Do not use children’s personal data in ways that have been shown to be detrimental to their wellbeing, or that go against industry codes of practice, other regulatory provisions or Government advice.
  6. Policies and community standards: Uphold your own published terms, policies and community standards (including but not limited to privacy policies, age restriction, behaviour rules and content policies).
  7. Default settings: Settings must be ‘high privacy’ by default (unless you can demonstrate a compelling reason for a different default setting, taking account of the best interests of the child).
  8. Data minimisation: Collect and retain only the minimum amount of personal data you need to provide the elements of your service in which a child is actively and knowingly engaged. Give children separate choices over which elements they wish to activate.
  9. Data sharing: Do not disclose children’s data unless you can demonstrate a compelling reason to do so, taking account of the best interests of the child.
  10. Geolocation: Switch geolocation options off by default (unless you can demonstrate a compelling reason for geolocation to be switched on by default, taking account of the best interests of the child). Provide an obvious sign for children when location tracking is active. Options which make a child’s location visible to others must default back to ‘off’ at the end of each session.
  11. Parental controls: If you provide parental controls, give the child age appropriate information about this. If your online service allows a parent or carer to monitor their child’s online activity or track their location, provide an obvious sign to the child when they are being monitored.
  12. Profiling: Switch options which use profiling ‘off’ by default (unless you can demonstrate a compelling reason for profiling to be on by default, taking account of the best interests of the child). Only allow profiling if you have appropriate measures in place to protect the child from any harmful effects (in particular, being fed content that is detrimental to their health or wellbeing).
  13. Nudge techniques: Do not use nudge techniques to lead or encourage children to provide unnecessary personal data or weaken or turn off their privacy protections.
  14. Connected toys and devices: If you provide a connected toy or device ensure you include effective tools to enable conformance to this code.
  15. Online tools: Provide prominent and accessible tools to help children exercise their data protection rights and report concerns.

The Age Appropriate Design Code also defines children as under the age of 18, a higher bar than current UK data protection law, which, for example, sets 13 as the age at which children can legally consent to being tracked online.

So, assuming (very wildly) that Internet services were to suddenly decide to follow the code to the letter, setting trackers off by default and not nudging users into weakening privacy-protecting defaults by manipulating them to give up more data, the code could, in theory, raise the level of privacy both children and adults typically get online.

However it’s not legally binding — so there’s a pretty fat chance of that.

The regulator does make a point of noting that the standards in the code are backed by existing data protection laws, which it regulates and can legally enforce. It points out that it has powers to take action against law breakers, including “tough sanctions” such as orders to stop processing data and fines of up to 4% of a company’s global turnover.

So, in a way, the regulator appears to be saying: ‘Are you feeling lucky, data punk?’

Last April the UK government published a white paper setting out its proposals for regulating a range of online harms — including seeking to address concern about inappropriate material that’s available on the Internet being accessed by children.

The ICO’s Age Appropriate Design Code is intended to support that effort. So there’s also a chance that some of the same sorts of stipulations could be baked into the planned online harms bill.

“This is not, and will not be, ‘law’. It is just a code of practice,” said Neil Brown, an Internet, telecoms and tech lawyer at Decoded Legal, discussing the likely impact of the suggested standards. “It shows the direction of the ICO’s thinking, and its expectations, and the ICO has to have regard to it when it takes enforcement action but it’s not something with which an organisation needs to comply as such. They need to comply with the law, which is the GDPR [General Data Protection Regulation] and the DPA [Data Protection Act] 2018.

“The code of practice sits under the DPA 2018, so companies which are within the scope of that are likely to want to understand what it says. The DPA 2018 and the UK GDPR (the version of the GDPR which will be in place after Brexit) covers controllers established in the UK, as well as overseas controllers which target services to people in the UK or monitor the behaviour of people in the UK. Merely making a service available to people in the UK should not be sufficient.”

“Overall, this is consistent with the general direction of travel for online services, and the perception that more needs to be done to protect children online,” Brown also told us.

“Right now, online services should be working out how to comply with the GDPR, the ePrivacy rules, and any other applicable laws. The obligation to comply with those laws does not change because of today’s code of practice. Rather, the code of practice shows the ICO’s thinking on what compliance might look like (and, possibly, goldplates some of the requirements of the law too).”

Organizations that choose to take note of the code — and are in a position to be able to demonstrate they’ve followed its standards — stand a better chance of persuading the regulator they’ve complied with relevant privacy laws, per Brown.

“Conversely, if they want to say that they comply with the law but not with the code, that is (legally) possible, but might be more of a struggle in terms of engagement with the ICO,” he added.

Zooming back out, the government said last fall that it’s committed to publishing draft online harms legislation for pre-legislative scrutiny “at pace”.

But at the same time it dropped a controversial plan, included in a 2017 piece of digital legislation, which would have made age checks for accessing online pornography mandatory, saying it wanted to focus on developing “the most comprehensive approach possible to protecting children”, i.e. via the online harms bill.

How comprehensive the touted ‘child protections’ will end up being remains to be seen.

Brown suggested age verification could come through as a “general requirement”, given the age verification component of the Digital Economy Act 2017 was dropped — and “the government has said that these will be swept up in the broader online harms piece”.

It has also been consulting with tech companies on possible ways to implement age verification online.

The difficulties of regulating perpetually iterating Internet services — many of which are also operated by companies based outside the UK — have been writ large for years. (And are mired in geopolitics.)

Meanwhile, the enforcement of existing European digital privacy laws remains, to put it politely, a work in progress.

Just because it’s legal, it doesn’t mean it’s right

Companies often tout their compliance with industry standards — I’m sure you’ve seen the logos, stamps and “Privacy Shield Compliant” declarations. As we, and the FTC, were reminded a few months ago, that label does not mean the criteria were met initially, much less years later when finally subjected to government review.

Alastair Mactaggart — an activist who helped promote the California Consumer Privacy Act (CCPA) — has threatened a ballot initiative allowing companies to voluntarily certify compliance with CCPA 2.0 to the still-unformed agency. While that kind of advertising seems like a no-brainer for companies looking to stay competitive in a market that values privacy and security, is it actually? Business considerations aside, is there a moral obligation to comply with all existing privacy laws, and is a company unethical for relying on exemptions from such laws?

I reject the notion that compliance with the law and morality are the same thing, or that one denotes the other. In reality, it’s a nuanced decision based on cost, client base, risk tolerance and other factors. Moreover, giving voluntary compliance the appearance of additional trust or altruism is actually harmful to consumers, because our current system does not permit effective or timely oversight and the types of remedies available after the fact do not address the actual harms suffered.

It’s not unethical to rely on an exemption

Compliance is not tied to morality.

At its heart, compliance is a cost analysis, and a nuanced one at that. Privacy laws, as much as legislators want to believe otherwise, are not black and white in their implementation. Not all unregulated data collection is nefarious, and not all companies that comply (voluntarily or otherwise) are purely altruistic. While penalties carry a financial cost, data collection is a revenue source for many companies because of the knowledge and insights gained from large stores of varied data, and because of other companies’ need to access that data.

Companies weigh the cost of building compliant systems and processes, and of amending existing agreements with what are often thousands of service providers, against the business lost by being unable to provide those services to consumers covered by those laws.

There is also the matter of which laws apply. Complying with one law may interfere with or lessen the protections offered by the laws you already follow, the very ones that make you exempt in the first place: for instance, where one law prohibits you from sharing certain information for security purposes and another would require you to disclose it, making both the data and the person less secure.

Strict compliance also allows companies to rest on their laurels while taking advantage of a privacy-first reputation. The law is the minimum standard, while ethics are meant to prescribe the maximum. Complying, even with an inapplicable law, is quite literally the least a company can do. It also puts companies in a position where they stop making additional choices or innovating, because they have already done more than what is expected. This is particularly true with technology-based laws, where legislation often lags behind the industry and its capabilities.

Moreover, who decides what is ethical varies by time, culture and power dynamics. Complying with the strict letter of a law meant to cover everyone does not take into account that companies in different industries use data differently. Companies are trying to fit into a framework without even answering the question of which framework they should voluntarily comply with. I can hear you now: “That’s easy! The one with the highest/strongest/strictest standard for collection.” These are all adjectives that get thrown around when talking about a federal privacy law. However, “highest,” “most” and “strongest” are all subjective and do not live in a vacuum, especially if states start coming out with their own patchwork of privacy laws.

I’m sure there are people that say that Massachusetts — which prohibits a company from providing any details to an impacted consumer — offers the “most” consumer protection, while there is a camp that believes providing as much detailed information as possible — like California and its sample template — provides the “most” protection. Who is right? This does not even take into account that data collection can happen across multiple states. In those instances, which law would cover that individual?

Government agencies can’t currently provide sufficient oversight

Slapping a certification onto your website that you know you don’t meet has been treated as an unfair and deceptive practice by the FTC. However, the FTC generally does not have fining authority on a first-time violation. And while it can force companies to compensate consumers, damages can be very difficult to calculate.
Unfortunately, damages for privacy violations are even harder to prove in court; funds that are obtained go disproportionately to counsel, with each individual receiving a de minimis payout, if they even make it to court. The Supreme Court has indicated through its holdings in Clapper v. Amnesty International USA, 133 S. Ct. 1138 (2013), and Spokeo, Inc. v. Robins, 136 S. Ct. 1540 (2016), that damages like the potential of fraud or the ramifications of data loss or misuse are too speculative to support standing to maintain a lawsuit.

This puts the FTC in a weaker negotiating position to get results with as few resources expended as possible, particularly as the FTC can only do so much — it has limited jurisdiction and no control over banks or nonprofits. To echo Commissioner Noah Phillips, this won’t change without a federal privacy law that sets clear limits on data use and damages and gives the FTC greater power to enforce these limits in litigation.

Finally, in addition to these legal constraints, the FTC is understaffed in privacy, with approximately 40 full-time staff members dedicated to protecting the privacy of more than 320 million Americans. To adequately police privacy, the FTC needs more lawyers, more investigators, more technologists and state-of-the-art tech tools. Otherwise, it will continue to fund certain investigations at the cost of understaffing others.

Outsourcing oversight to a private company may not fare any better — for the simple fact that such certification will come at a high price (especially in the beginning), leaving medium and small-sized businesses at a competitive disadvantage. Further, unlike a company’s privacy professionals and legal team, a certification firm is more likely to look to compliance with the letter of the law — putting form over substance — instead of addressing the nuances of any particular business’ data use models.

Existing remedies don’t address consumer harms

Even when an agency does come down with an enforcement action, the penalty powers those agencies currently have do not adequately address the consumer harm. That is largely because compliance with privacy legislation is not an on-off switch, and the current regime is focused more on financial restitution.
Even where there are prescribed actions to come into compliance with the law, that compliance takes years and does not address the ramifications of historic non-compliant data use.

Take CNIL’s formal notice against Vectaury for failing to collect informed, affirmative consent. Vectaury collected geolocation data from mobile app users to provide marketing services to retailers, using a consent management platform that it developed to implement the Transparency and Consent Framework of the IAB (a self-regulating industry association). The notice warrants particular attention because Vectaury was following an established trade association guideline, and yet its consent was deemed invalid.

As a result, CNIL put Vectaury on notice to cease processing data this way and to delete the data collected during that period. And while this can be counted as a victory, because the decision forced the company to rebuild its systems, how many companies would have the budget to do this if they didn’t have the resources to comply in the first place? Further, a rebuild takes time, so what happens to their business model in the meantime? Can they continue to be non-compliant, in theory, until the agency-set deadline for compliance is met? And even if the underlying data is deleted, neither the parties the data was shared with nor the inferences built on it are affected.

The water is even murkier when you’re examining remedies for false Privacy Shield self-certification. A Privacy Shield logo on a company’s site essentially says that the company believes its cross-border data transfers are adequately secured and are limited to parties the company believes have responsible data practices. So if a company is found to have falsely made those underlying representations (or to have failed to comply with another requirement), it would have to stop conducting those transfers; and if those transfers are part of how its services are provided, does it just have to stop providing those services to its customers immediately?

In practice, then, choosing not to comply with an otherwise inapplicable law is not a matter of not caring about your customers, nor a moral failing; it is quite literally just “not how anything works.” Nor is there any added consumer benefit in trying to, and isn’t the consumer what counts in the end?

Opinions expressed in this article are those of the author and not of her firm, investors, clients or others.

GitGuardian raises $12M to help developers write more secure code and ‘fix’ GitHub leaks

Data breaches that can cause millions of dollars in damages have been the bane of many a company’s existence. Preventing them requires a great deal of real-time monitoring, and the problem is that this world has become incredibly complex. A SANS Institute survey found half of company data breaches were the result of account or credential hacking.

GitGuardian has attempted to address this with a highly developer-centric cybersecurity solution.

It’s now attracted the attention of major investors, to the tune of $12 million in Series A funding led by Balderton Capital. Scott Chacon, co-founder of GitHub, and Solomon Hykes, founder of Docker, also participated in the round.

The startup plans to use the investment from Balderton Capital to expand its customer base, predominantly in the US. Around 75% of its clients are currently based in the US, with the remainder in Europe, and the funding will continue to drive this expansion.

Built to uncover sensitive company information hiding in online repositories, GitGuardian says its real-time monitoring platform can address the problem of data leaks. Modern enterprise software developers have to integrate multiple internal and third-party services, which means they need incredibly sensitive “secrets”, such as login details, API keys and private cryptographic keys, that are used to protect confidential systems and data.

GitGuardian’s systems detect thousands of credential leaks per day. The team originally built its launch platform with public GitHub in mind; however, GitGuardian is also built as a private solution to monitor and notify on secrets that are inappropriately disseminated across internal systems, such as private code repositories or messaging systems.
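GitGuardian hasn’t detailed its detection engine publicly, but the basic idea of scanning source code for secret-shaped strings can be illustrated with a minimal sketch. The rule names, patterns and file-walking logic below are assumptions chosen for illustration, not GitGuardian’s implementation; production scanners rely on far larger rule sets, entropy analysis and contextual filtering to keep false positives manageable.

```python
import re
from pathlib import Path

# Illustrative rules only: a handful of regexes that match common secret shapes.
SECRET_PATTERNS = {
    "AWS access key ID": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private key header": re.compile(r"-----BEGIN (?:RSA |EC |OPENSSH )?PRIVATE KEY-----"),
    "generic API key assignment": re.compile(
        r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*[\"'][A-Za-z0-9_\-]{16,}[\"']"
    ),
}

def scan_file(path: Path):
    """Yield (line_number, rule_name, line) for each suspected secret in a file."""
    try:
        text = path.read_text(errors="ignore")
    except OSError:
        return
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                yield lineno, name, line.strip()

def scan_repo(root: str):
    """Walk a checked-out repository and report suspected hard-coded secrets."""
    for path in Path(root).rglob("*"):
        if path.is_file() and ".git" not in path.parts:
            for lineno, name, line in scan_file(path):
                print(f"{path}:{lineno}: possible {name}: {line[:80]}")

if __name__ == "__main__":
    scan_repo(".")  # run from the root of a local clone
```

Run from the root of a local clone, the sketch prints the file, line number and matched rule for each hit; a service operating at GitGuardian’s scale would instead watch commit streams in real time and alert the owning developer or security team.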

Solomon Hykes, founder of Docker and investor at GitGuardian, said: “Securing your systems starts with securing your software development process. GitGuardian understands this, and they have built a pragmatic solution to an acute security problem. Their credentials monitoring system is a must-have for any serious organization”.

Do they have any competitors?

Co-founder Jérémy Thomas told me: “We currently don’t have any direct competitors. This generally means that there’s no market, or the market is too small to be interesting. In our case, our fundraise proves we’ve put our hands on something huge. So the reason we don’t have competitors is because the problem we’re solving is counterintuitive at first sight. Ask any developer, they will say they would never hardcode any secret in public source code. However, humans make mistakes and when that happens, they can be extremely serious: it can take a single leaked credential to jeopardize an entire organization. To conclude, I’d say our real competitors so far are black hat hackers. Black hat activity is real on GitHub. For two years, we’ve been monitoring organized groups of hackers that exchange sensitive information they find on the platform. We are competing with them on speed of detection and scope of vulnerabilities covered.”