
Written by Natasha Lomas

Meet Facebook’s latest fake

Facebook CEO Mark Zuckerberg, a 35-year-old billionaire who keeps refusing to sit in front of international parliamentarians to answer questions about his ad business’ impact on democracy and human rights around the world, has a new piece of accountability theatre to sell you: an “Oversight Board”.

Not of Facebook’s business itself. Though you’d be forgiven for thinking that’s what Facebook’s blog post is trumpeting, with the grand claim that it’s “Establishing Structure and Governance for an Independent Oversight Board”.

Referred to during the seeding stage last year, when Zuckerberg gave select face-time to podcast and TV hosts he trusted to spread his conceptual gospel with a straight face, as a sort of ‘Supreme Court of Facebook’, this supplementary content decision-making body has since been outfitted in the company’s customary (for difficult topics) bloodless ‘Facebookese’ (see also “inauthentic behavior”, its choice euphemism for fake activity on its platform).

The Oversight Board is intended to sit atop the daily grind of Facebook content moderation, which takes place behind closed doors and signed NDAs, where outsourced armies of contractors are paid to eyeball the running sewer of hate, abuse and violence so actual users don’t have to, as a more visible mechanism for resolving and thus (Facebook hopes) quelling speech-related disputes.

Facebook’s one-size-fits-all content moderation policy doesn’t fit all, and can’t. There’s no such thing as a 2.2BN+ “community” — as the company prefers to refer to its globe-spanning user-base. So quite how the massive diversity of Facebook users can be meaningfully represented by the views of a last-resort case review body with as few as 11 members has not yet been made clear.

“When it is fully staffed, the board is likely to be forty members. The board will increase or decrease in size as appropriate,” Facebook writes vaguely this week.

Even if it were proposing one board member per market of operation (and it’s not) that would require a single individual to meaningfully represent the diverse views of an entire country. Which would be ludicrous, as well as risking the usual political divides stymying good faith efforts.

It seems most likely Facebook will seek to ensure the initial make-up of the board reflects its corporate ideology — as a US company committed to upholding freedom of expression. (It’s clearly no accident the first three words in the Oversight Board’s charter are: “Freedom of expression”.)

Anything less US-focused might risk the charter’s other clearly stated introductory position — that “free expression is paramount”.

But where will that leave international markets which have suffered the worst kinds of individual and societal harms as a consequence of Facebook’s failure to moderate hate speech, dangerous disinformation and political violence, to name a few of the myriad content scandals that dog the company wherever it goes?

Facebook needs international markets for its business to turn a profit. But you sure wouldn’t know it from its distribution of resources. Not for nothing has the company been accused of digital colonialism.

The level of harm flowing from Facebook decisions to take down or leave up certain pieces of content can be excruciatingly high. Such as in Myanmar where its platform became a conduit for hate speech-fuelled ethnic violence towards the Rohingya people and other ethnic minorities.

It’s reputation-denting failures like Myanmar — which last year led the UN to dub Facebook’s platform “a beast” — that are motivating this latest self-regulation effort. Having made its customary claim that it will do a better job of decision-making in future, Facebook is now making a show of enlisting outsiders for help.

The wider problem is Facebook has scaled so big its business is faced with a steady pipeline of tricky, controversial and at times life-threatening content moderation decisions. Decisions it claims it’s not comfortable making as a private company. Though Facebook hasn’t expressed discomfort at monetizing all this stuff. (Even though its platform has literally been used to target ads at Nazis.)

Facebook’s size is humanity’s problem but of course Facebook isn’t putting it like that. Instead — coming sometime in 2020 — the company will augment its moderation processes with a lottery-level chance of a final appeal via a case referral to the Oversight Board.

The level of additional oversight here will of course be exceptionally select. This is a last resort, cherry-picked appeal layer that will only touch a fantastically tiny proportion of the content choices Facebook moderators make every second of every day — and from which real world impacts ripple out and rain down. 

“We expect the board will only hear a small number of cases at first, but over time we hope it will expand its scope and potentially include more companies across the industry as well,” Zuckerberg writes this week, managing output expectations still many months ahead of the slated kick off — before shifting focus onto the ‘future hopes’ he’s always much more comfortable talking about. 

Case selection will be guided by Facebook’s business interests, meaning the push, even here, is still for scale of impact. Facebook says cases will be selected from a pool of complaints and referrals that “have the greatest potential to guide future decisions and policies”.

The company is also giving itself the power to leapfrog general submissions by sending expedited cases directly to the board to ask for a speedy opinion. So its content questions will be prioritized. 

Incredibly, Facebook is also trying to sell this self-styled “oversight” layer as independent from Facebook.

The Oversight Board’s overtly bureaucratic branding is pepped up in Facebook headline spin as “an Independent Oversight Board”. Although the adjective is curiously absent from other headings in Facebook’s already sprawling literature about the OB. Including the newly released charter which specifies the board’s authority, scope and procedures, and was published this week.

The nine-page document was accompanied by a letter from Zuckerberg in which he opines on “Facebook’s commitment to the Oversight Board”, as his header puts it — also dropping the word ‘independent’ in favor of slipping into a comfortably familiar case. Funny that.

The body text of Zuckerberg’s letter goes on to make several references to the board as “independent”; an “independent organization”; exercising “its independent judgement”. But here that’s essentially just Mark’s opinion.

The elephant in the room — which, if we continue the metaphor, is in the process of being dressed by Facebook in a fancy costume that attempts to make it look like, well, a board room table — is the supreme leader’s ongoing failure to submit himself and his decisions to any meaningful oversight.

Supreme leader is an accurate descriptor for Zuckerberg as Facebook CEO, given the share structure and voting rights he has afforded himself mean no one other than Zuckerberg can sack Zuckerberg. (Asked last year, during a podcast interview with Recode’s Kara Swisher, if he was going to fire himself, in light of myriad speech scandals on his platform, Zuckerberg laughed and then declined.)

It’s a corporate governance dictatorship that has allowed Facebook’s boy king to wield vast power around the world without any internal checks. Power without moral responsibility if you will.

Throughout Zuckerberg’s (now) 15-year apology tour turn as Facebook CEO neither the claims he’ll do things differently next time nor the cool expansionist ambition have wavered. He’s still at it of course; with a plan for a global digital currency (Libra), while bullishly colonizing literal hook-ups (Facebook Dating). Anything to keep the data and ad dollars flowing.

Recently Facebook also paid a $5BN FTC fine to avoid its senior executives having to face questions about their data governance and policy enforcement fuck-ups — leaving Zuckerberg & co free to get back to lucrative privacy-screwing business as usual. (To put the fine in context, Facebook’s 2018 full year revenue clocked in at $55.8BN.)

All of which is to say that an ‘independent’ Facebook-devised “Oversight Board” is just a high gloss sticking plaster to cover the lack of actual regulation — internal and external — of Zuckerberg’s empire.

It is also an attempt by Facebook to paper over its continued evasion of democratic accountability. To distract from the fact its ad platform is playing fast and loose with people’s rights and lives; reshaping democracies and communities while Facebook’s founder refuses to answer parliamentarians’ questions or account for scandal-hit business decisions. Privacy is never dead for Mark Zuckerberg.

Evasion is actually rather too tame a term. How Facebook operates is far more actively hostile than that. Its platform is reshaping us without accountability or oversight, even as it ploughs profits into spinning and shape-shifting its business in a bid to prevent our democratically elected representatives from being able to reshape it.

Zuckerberg appropriating the language of civic oversight and jurisprudence for this “project”, as his letter calls the Oversight Board — committing to abide by the terms of a content decision-making review vehicle entirely of his own devising, whose Facebook-written charter stipulates it will “review and decide on content in accordance with Facebook’s content policies and values” — is hardly news. Even though Facebook is spinning at the very highest level to try to make it so.

What would constitute a newsworthy shock is Facebook’s CEO agreeing to take questions from the democratically elected representatives of the billions of users of his products who live outside the US.

Zuckerberg agreeing to meet with parliamentarians around the world so they can put to him questions and concerns on a rolling and regular basis would be a truly incredible news flash.

Instead it’s fiction. That’s not how the empire functions.

The Facebook CEO has instead ducked as much democratic scrutiny as a billionaire in charge of a historically unprecedented disinformation machine possibly can — submitting himself to an awkward question-dodging turn in Congress last year; and one fixed-format meeting of the EU parliament’s conference of presidents, initially set to take place behind closed doors (until MEPs protested), where he was heckled for failing to answer questions.

He has also, most recently, pressed US president Donald Trump’s flesh. We can only speculate on how that meeting of minds went. Power meet irresponsibility — or was it vice versa?


International parliamentarians trying on behalf of the vast majority of the world’s Facebook users to scrutinize Zuckerberg and hold his advertising business to democratic account have, meanwhile, been roundly snubbed.

Just this month Zuckerberg declined a third invitation to speak in front of the International Grand Committee on Disinformation which will convene in Dublin this November.

At a second meeting in Canada earlier this year Zuckerberg and COO Sheryl Sandberg both refused to appear — leading the Canadian parliament’s ethics committee to vote to subpoena the pair.

While, last year, the UK parliament got so frustrated with Facebook’s evasive behavior during a timely enquiry into online disinformation, which saw its questions fobbed off by a parade of Zuckerberg stand-ins armed with spin and misdirection, that a sort of intergovernmental alchemy occurred — and the International Grand Committee on Disinformation was formed in an eye-blink, bringing multiple parliaments together to apply democratic pressure to Facebook. 

The UK Digital, Culture, Media and Sport committee’s frustration at Facebook’s evasive behavior also led it to deploy arcane parliamentary powers to seize a cache of internal Facebook documents from a US lawsuit in a creative attempt to get at the world-view locked inside Zuckerberg’s blue box.

The unvarnished glimpse of Facebook’s business that these papers afforded certainly isn’t pretty… 

US legal discovery appears to be the only reliable external force capable of extracting data from inside the belly of the nation-sized beast. That’s a problem for democracies.

So Facebook instructing an ‘oversight board’ of its own making to do anything other than smooth publicity bumps in the road, and pave the way for more Facebook business as usual, is like asking a Koch brothers funded ‘stink tank’ to be independent of fossil fuel interests. The OB is just Facebook’s latest crisis PR tool. More fool anyone who signs up to ink their name to its democratically void rubberstamp.

Dig into the detail of the charter and cracks in the claimed “independence” soon appear.

Aside from the obvious overriding existential points that the board only exists because Facebook exists, making it a dependent function of Facebook whose purpose is to enable its spawning parental system to continue operating; and that it’s funded and charged with chartered purpose by the very same blue-veined god it’s simultaneously supposed to be overseeing (quite the conflict of interest), the charter states that Facebook itself will choose the initial board members. Who will then choose the rest of the first cohort of members.

“To support the initial formation of the board, Facebook will select a group of co-chairs. The co-chairs and Facebook will then jointly select candidates for the remainder of the board seats,” it writes in pale grey Facebookese with a tone set to ‘smooth reassurance’ — when the substance of what’s being said should really make you go ‘wtf, how is that even slightly independent?!’

Because the inaugural (Facebook-approved) member cohort will be responsible for the formative case selections — which means they’ll be laying down the foundational ‘case law’ that the board is also bound, per Facebook’s charter, to follow thereafter.

“For each decision, any prior board decisions will have precedential value and should be viewed as highly persuasive when the facts, applicable policies, or other factors are substantially similar,” runs an instructive section on the “basis of decision-making”.

The problem here hardly needs spelling out. This isn’t Facebook changing, this is more of the same ‘Facebook first’ ethos which has always driven its content moderation decisions — just now with a highly polished ‘overseen’ sheen.

This isn’t accountability either. It’s Facebook trying to protect its business from actual regulation by creating a blame-shifting firewall to shield its transparency-phobic execs from democratic (and moral) scrutiny. And indeed to shield Zuckerberg & his inner circle from future content scandals that might threaten to rock the throne, a la Cambridge Analytica.

(Judging by other events this week that mission may not be going so well… )

Given the lengths this company is going to in order to eschew democratic scrutiny — ducking and diving even as it weaves its own faux oversight structure to manage negative PR on its behalf (yep, more fakes!) — you really have to wonder what Facebook is trying to hide.

A moral vacuum the size of a black hole? Or perhaps it’s just trying to buy time to complete its corporate takeover of the democratic world order…

Because of course the Oversight Board can’t set actual Facebook policy. Don’t be ridiculous! It can merely issue policy recommendations — which Facebook can just choose to ignore.

So even if we imagine the OB running years in the future, when it might theoretically be possible its membership has drifted out of Facebook’s comfortable set-up “support” zone, the charter has baked in another firewall that lets Zuckerberg ignore any policy pressure he doesn’t like. Just, y’know, on the off-chance the board gets too independently minded. Truly, there’s nothing to see here.

Entities structured by corporate interests to role-play ‘neutral’ advice or ensure ‘transparent’ oversight — or indeed to promulgate self-interested propaganda dressed in the garb of intellectual expertise — are almost always a stacked trick.

This is why it’s preferable to live in a democracy. And be governed by democratically accountable institutions that are bound by legally enforceable standards of transparency. Though Facebook hopes you’ll be persuaded to vote for manipulation by corporate interest instead.

So while Facebook’s claim that the Oversight Board will operate “transparently” sure sounds good, it’s also entirely meaningless. These are not legal standards of transparency. Facebook is a business, not a democracy. There are no legal binds here. It’s self-regulation. Ergo, a pantomime.

You can see why Facebook avoided actually calling the OB its ‘Supreme Court’; that would have been trolling a little too close to the bone.

Without legal standards of transparency (or indeed democratic accountability) being applied, there are endless opportunities for Facebook’s self interest to infiltrate the claimed separation between oversight board, oversight trust and the rest of its business; to shape and influence case selections, decisions and policy recommendations; and to seed and steer narrative-shaping discussion around hot button speech issues which could help move the angry chatter along — all under the carefully spun cover of ‘independent external oversight’.

No one should be fooled into thinking a Facebook-shaped and funded entity can meaningfully hold Facebook to account on anything. Still less when, as in this case, it’s been devised to absorb the flak from irreconcilable speech conflicts so Facebook doesn’t have to.

It’s highly doubtful that even a truly independent board cohort slotted into this Zuckerberg PR vehicle could meaningfully influence Facebook’s policy in a more humanitarian direction. Not while its business model is based on mass-scale attention harvesting and privacy-hostile people profiling. The board’s policy recommendations would have to demand a new business model. (To which we already know Facebook’s response: ‘LOL! No.’)

The Oversight Board is just the latest blame-shifting publicity exercise from a company with a user-base as big as a country that gifts it massive resource to throw at its ‘PR problem’ (as Facebook sees it); i.e. how to seem like a good corporate citizen whilst doing everything possible to evade democratic scrutiny and outrun the leash of government regulation. tl;dr: You can’t fix anything if you don’t believe there’s an underlying problem in the first place.

For an example of how the views of a few hand-picked independent experts can be channeled to further a particular corporate agenda look no further than the panel of outsiders Google assembled in Europe in 2014 in response to the European Court of Justice ‘right to be forgotten’ ruling — an unappealable legal decision that ran counter to its business interests.

Google used what it billed as an “advisory committee” of outsiders mostly as a publicity vehicle, holding a large number of public ‘hearings’ where it got to frame a debate and lobby loudly against the law. In such a context Google’s nakedly self-interested critique of EU privacy rights was lent a learned, regionally seasoned dressing of nuanced academic concern, thanks to the outsiders doing time on its platform.

Google also claimed the panel would steer its decision-making process on how to implement the ruling. And in their final report the committee ended up aligning with Google’s preference to only carry out search de-indexing at the European (rather than .com global) domain level. Their full report did contain some dissent. But Google’s preferred policy position won out. (And, yes, there were good people on that Google-devised panel.)

Facebook’s Oversight Board is another such self-interested tech giant stunt. One where Facebook gets to choose whether or not to outsource a few tricky content decisions while making a big show of seeming outward-looking, even as it works to shift and defuse public and political attention from its ongoing lack of democratic accountability.

What’s perhaps most egregious about this latest Facebook charade is it seems intended to shift attention off of the thousands of people Facebook pays to labor daily at the raw coal face of its content business. An outsourced army of voiceless workers who are tasked with moderating at high speed the very worst stuff that’s uploaded to Facebook — exposing themselves to psychological stress, emotional trauma and worse, per multiple media reports.

Why isn’t Facebook announcing a committee to provide that existing expert workforce with a public voice on where its content lines should lie, as well as the power to issue policy recommendations?

It’s impossible to imagine Facebook actively supporting Oversight Board members being selected from among the pool of content moderation contractors it already pays to stop humanity shutting its business down in sheer horror at what’s bubbling up the pipe.

On member qualifications, the Oversight Board charter states: “Members must have demonstrated experience at deliberating thoughtfully and as an open-minded contributor on a team; be skilled at making and explaining decisions based on a set of policies or standards; and have familiarity with matters relating to digital content and governance, including free expression, civic discourse, safety, privacy and technology.”

There’s surely not a Facebook moderator in the whole wide world who couldn’t already lay claim to that skill-set. So perhaps it’s no wonder the company’s ‘Oversight Board’ isn’t taking applications.

Google completes controversial takeover of DeepMind Health

Google has completed a controversial takeover of the health division of its UK AI acquisition, DeepMind.

The personnel move had been delayed as National Health Service (NHS) trusts considered whether to shift their existing DeepMind contracts — some for a clinical task management app, others involving predictive health AI research — to Google.

In a blog post yesterday Dr Dominic King, formerly of DeepMind (and the NHS), now UK site lead at Google Health, confirmed the transfer, writing: “It’s clear that a transition like this takes time. Health data is sensitive, and we gave proper time and care to make sure that we had the full consent and cooperation of our partners. This included giving them the time to ask questions and fully understand our plans and to choose whether to continue our partnerships. As has always been the case, our partners are in full control of all patient data and we will only use patient data to help improve care, under their oversight and instructions.”

The Royal Free NHS Trust, Taunton & Somerset NHS Foundation Trust, Imperial College Healthcare NHS Trust, Moorfields Eye Hospital NHS Foundation Trust and University College London Hospitals NHS Foundation Trust all put out statements yesterday confirming they have moved their contractual arrangements to Google.

In the case of the Royal Free, patients’ Streams data is moving to the Google Cloud Platform infrastructure to support expanding use of the app which surfaces alerts for a kidney condition to another of its hospitals (Barnet Hospital).

One NHS trust, Yeovil District Hospital NHS Foundation Trust, has not signed a new contract — and says it had never deployed Streams, suggesting it had not found a satisfactory way to integrate the app with its existing ways of working — instead taking the decision to terminate the arrangement. Though it’s leaving the door open to future health service provision from Google.

A spokeswoman for Yeovil hospital sent us this statement:

We began our relationship with DeepMind in 2017 and since then have been determining what part the Streams application could play in clinical decision making here at Yeovil Hospital.

The app was never operationalised, and no patient data was processed.

What’s key for us as a hospital, when it comes to considering the implementation of any new piece of technology, is whether it improves the effectiveness and safety of patient care and how it tessellates with existing ways of working. Working with the DeepMind team, we found that Streams is not necessary for our organisation at the current time.

Whilst our contractual relationship has ended, we will remain an anchor partner of Google Health so will continue to be part of conversations about emerging technology which may be of benefit to our patients and our clinicians in the future.

The hand-off of DeepMind Health to Google, which was announced just over a year ago, means the tech giant is now directly providing software services to a number of NHS trusts that had signed contracts with DeepMind for Streams; as well as taking over several AI research partnerships that involve the use of NHS patients’ data to try to develop predictive diagnostic models using AI technology.

DeepMind — which kicked off its health efforts by signing an agreement with the Royal Free NHS Trust in 2015, going on to publicly announce the health division in spring 2016 — said last year its future focus would be as a “research organisation”.

As recently as this July DeepMind was also touting a predictive healthcare research “breakthrough” — announcing it had trained a deep learning model for continuously predicting the future likelihood of a patient developing a life-threatening condition called acute kidney injury. (Though the AI is trained on heavily gender-skewed data from the US Department of Veterans Affairs.)

Yet it’s now become clear that it’s handed off several of its key NHS research partnerships to Google Health as part of the Streams transfer.

In its statement about the move yesterday, UCLH writes that “it was proposed” that its DeepMind research partnership — which is related to radiotherapy treatment for patients with head and neck cancer — be transferred to Google Health, saying this will enable it to “make use of Google’s scale and experience to deliver potential breakthroughs to patients more rapidly”.

“We will retain control over the anonymised data and remain responsible for deciding how it is used,” it adds. “The anonymised data is encrypted and only accessible to a limited number of researchers who are working on this project with UCLH’s permission. Access to the data will only be granted for officially approved research purposes and will be automatically audited and logged.”

It’s worth pointing out that the notion of “anonymised” high-dimensional health data should be treated with a healthy degree of scepticism — given the risk of re-identification.
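To see why, here’s a minimal sketch of a classic linkage attack, a toy illustration in TypeScript with entirely hypothetical field names and data, showing how a handful of innocuous-looking quasi-identifiers can be joined against a named public dataset to re-identify an “anonymous” health record:

```typescript
// Toy linkage attack on "anonymised" records (hypothetical fields/data).
// Real attacks join on many more quasi-identifiers: visit dates,
// diagnoses, full postcode history, etc.

interface AnonRecord {
  birthYear: number;
  sex: string;
  postcodePrefix: string;
  condition: string;
}

interface PublicRecord {
  name: string;
  birthYear: number;
  sex: string;
  postcodePrefix: string;
}

const anonymised: AnonRecord[] = [
  { birthYear: 1954, sex: "F", postcodePrefix: "NW3", condition: "acute kidney injury" },
];

const electoralRoll: PublicRecord[] = [
  { name: "Jane Doe", birthYear: 1954, sex: "F", postcodePrefix: "NW3" },
  { name: "John Roe", birthYear: 1987, sex: "M", postcodePrefix: "NW3" },
];

// Join the two datasets on the shared quasi-identifiers; if the
// combination is unique, the "anonymous" record is re-identified.
for (const rec of anonymised) {
  const matches = electoralRoll.filter(
    (p) =>
      p.birthYear === rec.birthYear &&
      p.sex === rec.sex &&
      p.postcodePrefix === rec.postcodePrefix
  );
  if (matches.length === 1) {
    console.log(`${matches[0].name} -> ${rec.condition}`); // "Jane Doe -> acute kidney injury"
  }
}
```

The more dimensions a dataset has, the more likely each combination of attributes is unique, which is why rich medical records resist true anonymisation.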

Moorfields also identifies Google’s “resources” as the incentive for agreeing for its eye-scan related research partnership to be handed off, writing: “This updated partnership will allow us to draw on Google’s resources and expertise to extend the benefits of innovations that AI offers to more of our clinicians and patients.”

Quite where this leaves DeepMind’s ambitions to “lead the way in fundamental research applying AI to important science and medical research questions, in collaboration with academic partners, to accelerate scientific progress for the benefit of everyone”, as it put it last year — when it characterized the hand-off to Google Health as all about ‘scaling Streams’ — remains to be seen.

We’ve reached out to DeepMind for comment on that.

Co-founder Mustafa Suleyman, who’s been taking a leave of absence from the company, tweeted yesterday to congratulate the Google Health team.

DeepMind’s NHS research contracts also transferring to Google Health suggests the tech giant wants zero separation between core AI health research and the means of application, using its own cloud infrastructure, of any promising models it’s able to train off of patient data and commercialize by selling to the same healthcare service providers as apps and services.

You could say Google is seeking to bundle access to the high resolution patient data that’s essential for developing health AIs with the provision of commercial digital healthcare services it hopes to sell hospitals down the line, all funnelled through the same Google cloud infrastructure.

As we reported at the time, the hand-off of DeepMind Health to Google is controversial.

Firstly because the trust that partnered with DeepMind in 2015 to develop Streams was later found by the UK’s data protection watchdog to have breached UK law. The ICO said there was no legal basis for the Royal Free to have shared the medical records of ~1.6M patients with DeepMind during the app’s development.

Despite concerns being raised over the legal basis for sharing patients’ data throughout 2016 and 2017, DeepMind continued inking NHS contracts for Streams — claiming at the time that patient data would never be handed to Google. Yet fast-forward a couple of years and it’s now literally sitting on the tech giant’s servers.

It’s that U-turn that led the DeepMind-to-Google Health hand-off to be branded a “trust demolition” by legal experts when the news was announced last year.

This summer the UK’s patient data watchdog, the National Data Guardian, released correspondence between her office and the ICO which informed the latter’s 2017 finding that Streams had breached data protection law — in which she articulates a clear regulatory position that the “reasonable expectations” of patients must govern non-direct care uses for people’s health data, rather than healthcare providers relying on doctors to decide whether they think the intended purpose for people’s medical information is justified.

The Google Health blog post talks a lot about “patient care” and “patient data” but has nothing to say about patients’ expectations of how their personal information should be used, with King writing that “our partners are in full control of all patient data and we will only use patient data to help improve care, under their oversight and instructions”.

It was exactly such an ethical blind spot around the patient’s perspective that led Royal Free doctors to override considerations about people’s medical privacy in the rush to throw their lot in with Google-DeepMind and scramble for AI-fuelled predictive healthcare.

Patient consent was not sought for passing medical records then; nor have patients been consulted over the transfer of Streams contracts (and people’s data) to Google now.

And while — after it was faced with public outcry over the NHS data it was processing — DeepMind did go on to publish its contracts with NHS trusts (with some redactions), Google Health is not offering any such transparency on the replacement contracts that have been inked now. So it’s not clear whether there have been any other changes to the terms. Patients have to take all that on trust.

We reached out to the Royal Free Trust with questions about the new contract with Google but a spokeswoman just pointed us to the statement on its website — where it writes: “All migration and implementation will be completed to the highest standards of security and will be compliant with relevant data protection legislation and NHS information governance requirements.”

“As with all of our arrangements with third parties, the Royal Free London remains the data controller in relation to all personal data. This means we retain control over that personal data at all times and are responsible for deciding how that data is used for the benefit of patient care,” it adds.

In another reduction in transparency accompanying this hand-off from DeepMind to Google Health, an independent panel of reviewers that DeepMind appointed to oversee its work with the NHS in another bid to boost trust has been disbanded.

“As we announced in November, that review structure — which worked for a UK entity primarily focused on finding and developing healthcare solutions with and for the NHS — is not the right structure for a global effort set to work across continents as well as different health services,” King confirmed yesterday.

In its annual report last year the panel had warned of the risk of DeepMind exerting “excessive monopoly power” as a result of the data access and streaming infrastructure bundled with provision of the Streams app. For DeepMind then read Google now.

Independent experts raising concerns about monopoly power unsurprisingly doesn’t align with Google’s global ambitions in future healthcare provision.

The last word from the independent reviewers is a Medium post penned by former chair, professor Donal O’Donoghue — who writes that he’s “disappointed that the IR experiment did not have the time to run its course and I am sad to say goodbye to a project I’ve found fascinating”.

“This was a fascinating exploration into how a new governance model could be applied to such an important area such as health,” he adds. “It’s hard to know how this would have developed over the years but… what is clear to me is that trust and transparency are of paramount importance in healthcare and I’m keen to see how Google Health, and other providers, deliver this in the future.”

But with trust demolished and transparency reduced Google Health appears to have learnt exactly nothing from DeepMind’s missteps.

Private search engine Qwant’s new CEO is Mozilla Europe veteran Tristan Nitot

French startup Qwant, whose non-tracking search engine has been gaining traction in its home market as a privacy-respecting alternative to Google, has made a change to its senior leadership team as it gears up for the next phase of growth.

Former Mozilla Europe president, Tristan Nitot, who joined Qwant last year as VP of advocacy, has been promoted to chief executive, taking over from François Messager — who also joined in 2018 but is now leaving the business. Qwant co-founder, Eric Leandri, meanwhile, continues in the same role as president.

Nitot, an Internet veteran who worked at Netscape and helped to found Mozilla Europe in 1998, where he later served as president and stayed until 2015 before leaving to write a book on surveillance, brings a wealth of experience in product and comms roles, as well as open source.

Most recently he spent several years working for personal cloud startup, Cozy Cloud.

“I’m basically here to help [Leandri] grow the company and structure the company,” Nitot tells TechCrunch, describing Qwant’s founder as an “amazing entrepreneur, audacious and visionary”.

Market headwinds have been improving for the privacy-focused Google rival in recent years as concern about foreign data-mining tech giants has stepped up in Europe.

Last year the French government announced it would be switching its search default from Google to Qwant. Buying homegrown digital tech is now apparently seen as a savvy product choice as well as good politics.

Meanwhile antitrust attention on dominant search giant Google, both at home and abroad, has led to policy shifts that directly benefit search rivals — such as an update of the default search engine lists baked into its Chromium engine, which was quietly put out earlier this year.

That behind the scenes change saw Qwant added as an option for users in the French market for the first time. (On hearing the news a sardonic Leandri thanked Google — but suggested Qwant users choose Firefox or the Brave browser for a less creepy web browsing experience.)

“A lot of companies and institutions have decided and have realized basically that they’ve been using a search engine which is not European. Which collects data. Massively. And that makes them uncomfortable,” says Nitot. “They haven’t made a conscious decision about that. Because they bring in a computer which has a browser which has a search engine in it set by default — and in the end you just don’t get to choose which search engine your people use, right.

“And so they’re making a conscious decision to switch to Qwant. And we’ve been spending a lot of time and energy on that — and it’s paying off big time.”

As well as the French administration’s circa 3M desktops being switched by default to Qwant (which it expects will be done this quarter), the pro-privacy search engine has been getting traction from other government departments and regional government, as well as large banks and schools, according to Nitot.

He credits a focus on search products for schoolkids with generating momentum, such as Qwant Junior, which is designed for kids aged 6-12, and excludes sex and violence from search results as well as being ad-free. (It’s set to get an update in the next few weeks.) It has also just been supplemented by Qwant School: a school search product aimed at 13-17 year-olds.

“All of that creates more users — the kids talk to their parents about Qwant Junior, and the parents install Qwant.com for them. So there’s a lot of momentum creating that growth,” Nitot suggests.

Qwant says it handled more than 18 billion search requests in 2018.

A growing business needs money to fuel it, of course. So a fundraising effort involving convertible bonds is one area Nitot says he’ll be focused on in the new role. “We are raising money,” he confirms.

Increasing efficiency — especially on the engineering front — is another key focus for the new CEO.

“The rest will be a focus on the organization, per se, how we structure the organization. How we evolve the company culture. To enable or to improve delivery of the engineering team, for example,” he says. “It’s not that it’s bad, it’s just that we need to make sure every dollar or every euro we invest gives as much as possible in return.”

Product wise, Nitot’s attention in the near term will be directed towards shipping a new version of Qwant’s search engine that will involve reengineering core tech to improve the quality of results.

“What we want to do [with v2] is to improve the quality of the results,” he says of the core search product. “You won’t be able to notice any difference, in terms of quality, with the other really good search engines that you may use — except that you know that your privacy is respected by Qwant.

“[As we raise more funding] we will be able to have a lot more infrastructure to run better and more powerful algorithms. And so we plan to improve that internationally… Every language will benefit from the new search engine. It’s also a matter of money and infrastructure to make this work on a web scale. Because the web is huge and it’s growing.

“The new version includes NLP (Natural Language Processing) technology… for understanding language, for understanding intentions — for example do you want to buy something or are you looking for a reference… or a place or a thing. That’s the kind of thing we’re putting in place but it’s going to improve a lot for every language involved.”

Western Europe will be the focus for v2 of the search engine, starting with French, German, Italian, Spanish and English — with a plan to “go beyond that later on”.

Nitot also says there will be staggered rollouts (starting with France), with Qwant planning to run old and new versions in parallel to quality-check the new version before finally switching users over.

“Shipping is hard as we used to say at Mozilla,” he remarks, refusing to be fixed to a launch date for v2 (beyond saying it’ll arrive in “less than a year”). “It’s a universal rule; shipping a new product is hard, and that’s what we want to do with version 2… I’ve been writing software since 1980 and so I know how predictions are when it comes to software release dates. So I’m very careful not to make promises.”

Developing more of its own advertising technologies is another focus for Qwant. On this front the aim is to improve margins by leaning less on partners like Microsoft.

“We’ve been working with partners until now, especially on the search engine result pages,” says Nitot. “We put Microsoft advertising on it. And our goal is to ramp up advertising technologies so that we rely on our own technologies — something that we control. And that hopefully will bring a better return.”

Like Google, Qwant monetizes searches by serving ads alongside results. But unlike Google these are contextual ads, meaning they are based on general location plus the substance of the search itself; rather than targeted ads, which entail persistent tracking and profiling of Internet users in order to inform the choice of ad (hence the feeling that ads are stalking you around the Internet).
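To make the distinction concrete, here’s a minimal sketch in TypeScript of what contextual selection involves, with hypothetical types and ad inventory (not Qwant’s actual systems): only the current query and a coarse region, with nothing persisted between searches.

```typescript
// Minimal sketch of contextual ad selection (hypothetical types and
// inventory, not Qwant's actual systems). The ad is chosen from the query
// text and a coarse region alone: no user ID, no history, nothing stored.

interface Ad {
  keywords: string[];
  regions: string[];
  creative: string;
}

const inventory: Ad[] = [
  { keywords: ["bike", "cycling"], regions: ["FR", "DE"], creative: "City bikes, delivered in France" },
  { keywords: ["laptop"], regions: ["FR"], creative: "Refurbished laptops" },
];

function pickContextualAd(query: string, region: string): Ad | undefined {
  const terms = query.toLowerCase().split(/\s+/);
  return inventory.find(
    (ad) => ad.regions.includes(region) && ad.keywords.some((k) => terms.includes(k))
  );
}

// The whole "profile" is the request itself; it can be discarded once served.
console.log(pickContextualAd("cheap bike lights", "FR")?.creative); // "City bikes, delivered in France"
```

A behavioral targeter would need a third input: a long-lived user profile keyed to a persistent identifier, which is exactly the data Qwant pledges not to hold.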

Serving contextual ads is a choice that lets Qwant offer a credible privacy pledge that Mountain View simply can’t match.

Yet up until 2006 Google also served contextual ads, as Nitot points out, before its slide into privacy-hostile microtargeting. “It’s a good old idea,” he argues of contextual ads. “We’re using it. We think it really is a valuable idea.” 

Qwant is also working on privacy-sensitive ad tech. One area of current work there is personalization. It’s developing a client-side, browser-based encrypted data store, called Masq, that’s intended to store and retrieve application data through a WebSocket connection. (Here’s the project Masq GitHub page.)

“Because we do not know the person that’s using the product it’s hard to make personalization of course. So we plan to do personalization of the product on the client side,” he explains. “Which means the server side will have no more details than we currently do, but on the client side we are producing something which is open source, which stores data locally on your device — whether that’s a laptop or smartphone — in the browser, it is encrypted so that nobody can reuse it unless you decide that you want that to happen.

“And it’s open source so that it’s transparent and can be audited and so that people can trust the technology because it runs on their own device, it stores on their device.”
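As a rough sketch of the general pattern Nitot describes (not Masq’s actual API; the function names here are hypothetical), a browser app can use the standard Web Crypto API to encrypt personalization data before persisting it, so the plaintext never leaves the device:

```typescript
// Rough sketch of client-side encrypted storage (NOT Masq's actual API).
// Data is encrypted in the browser with Web Crypto before being persisted
// (e.g. to IndexedDB), so only this device can read it back.

async function makeKey(): Promise<CryptoKey> {
  // Non-extractable AES-GCM key, generated and kept client-side.
  return crypto.subtle.generateKey({ name: "AES-GCM", length: 256 }, false, [
    "encrypt",
    "decrypt",
  ]);
}

async function encryptRecord(
  key: CryptoKey,
  record: object
): Promise<{ iv: Uint8Array; data: ArrayBuffer }> {
  const iv = crypto.getRandomValues(new Uint8Array(12)); // fresh nonce per record
  const plaintext = new TextEncoder().encode(JSON.stringify(record));
  const data = await crypto.subtle.encrypt({ name: "AES-GCM", iv }, key, plaintext);
  return { iv, data };
}

async function decryptRecord(
  key: CryptoKey,
  iv: Uint8Array,
  data: ArrayBuffer
): Promise<object> {
  const plaintext = await crypto.subtle.decrypt({ name: "AES-GCM", iv }, key, data);
  return JSON.parse(new TextDecoder().decode(plaintext));
}

// Usage: personalization state is stored and retrieved locally, and is
// unreadable ciphertext to anyone without the on-device key.
(async () => {
  const key = await makeKey();
  const { iv, data } = await encryptRecord(key, { favouriteTopics: ["cycling", "privacy"] });
  console.log(await decryptRecord(key, iv, data)); // { favouriteTopics: [...] }
})();
```

That matches Nitot’s description above: anything leaving the device is ciphertext, so the server side ends up with no more detail than it already has.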

“Right now it’s at alpha stage,” Nitot adds of Masq, declining to specify when exactly it might be ready for a wider launch.

The new CEO’s ultimate goal for Qwant is to become the search engine for Europe — a hugely ambitious target that remains far out of reach for now, with Google still commanding in excess of 90% regional marketshare. (A dominance that has got its business embroiled in antitrust hot water in Europe.)

Yet the Internet of today is not the same as the Internet of yesterday, when Netscape was a browsing staple — until Internet Explorer knocked it off its perch after Microsoft bundled its own rival browser as the default on Windows. And the rest, as they say, is Internet history.

Much has changed and much is changing. But abuses of market power are an old story. And as regulators act against today’s self-interested defaults there are savvy alternatives like Qwant primed and waiting to offer consumers a different kind of value.

“Qwant is created in Europe for the European citizens with European values,” says Nitot. “Privacy being one of these values that are central to our mission. It is not random that the CNIL — the French data protection authority — was created in France in 1978. It was the first time that something like that was created. And then GDPR [General Data Protection Regulation] was created in Europe. It doesn’t happen by accident. It’s a matter of values and the way people see their life and things around them, politics and all that. We have a very deep concern about privacy in France. It’s written in the European declaration of human rights.

“We build a product that reflects those values — so it’s appealing to European users.”