Meta targeted with fresh UK gov't warning against E2E encryption for Messenger, Instagram


Buckle up for another encryption fight: Hot on the heels of securing parliament’s approval for its Online Safety Bill yesterday, the U.K. government is amping up pressure on Meta not to roll out end-to-end encryption (E2EE) on Facebook Messenger and Instagram — unless it applies unspecified “safety measures” which the Home Secretary said should allow law enforcement to continue to detect child sexual abuse material (CSAM) while protecting user privacy.

In an interview on BBC Radio 4’s Today programme this morning, Suella Braverman claimed the vast majority of online child sexual abuse activity that U.K. law enforcement is currently able to detect takes place on Facebook Messenger and Instagram. She then hit out at Meta’s proposal to expand its use of E2EE “without safety measures” to the two services — arguing the move would “disable and prohibit law enforcement agencies from accessing this criminal activity [i.e. CSAM]”.

The social media giant has previously suggested it would implement strong encryption across all its apps by the end of 2023, and this year it has been ramping up testing. But friction from policymakers has clearly slowed the “pivot to privacy” founder Mark Zuckerberg announced all the way back in 2019, when he said the company would apply E2EE universally across its services.

Finally, though, this August, Meta announced it would enable E2EE by default for Messenger by the end of the year. But that plan is facing renewed attacks from the U.K. government — newly armed with the big stick of legal duties incoming via the Online Safety Bill.

Experts have been warning for years that surveillance powers in the legislation pose a risk to E2EE. But policymakers didn’t listen — all we got was a last-minute fudge. That means platforms like Meta, and U.K. web users, now face another round of crypto warring.

Behind closed doors, we understand, ministers have not been asserting their faith in the existence of Braverman’s claimed privacy-safe E2EE safety measures. Indeed, ministerial remarks earlier this month were widely interpreted as a sign the government was pulling back from a clash with major tech firms over encryption (a number of which have warned they will pull services from the U.K. rather than torch user security). So the threatening noises coming out of the Home Office this morning have an aura of political theatre.

But with the security and privacy of millions of web users lined up for another kicking, there’s nothing to enjoy in the curtain going up on another act of this familiar — and apparently endless — power play.

“End-to-end encryption with safety measures”

Asked by the BBC what the government would do if Meta goes ahead with its E2EE rollout without the additional measures she wants, Braverman confirmed Ofcom has powers to fine Meta up to 10% of its global annual turnover if it fails to comply with the Online Safety Bill. Again, though, she stressed the government hopes to “work constructively” with the company to apply “end-to-end encryption with safety measures”, as she put it.

“My job is fundamentally to protect children, not paedophiles, and I want to work with Meta so that they roll out the technology that enables that objective to be realised. That protects children but also protects their commercial interests,” she said. “We know that technology exists. We’ve also just passed our landmark legislation in the form of the Online Safety Bill that does give us new and extensive powers to, if necessary, via Ofcom, direct the social media companies to take necessary steps to remove indecent material, to roll out technology and to take the necessary steps to safeguard children.”

Pressed on what she would do if Meta doesn’t do what the government demands, Braverman said that — “ultimately, and potentially, and if necessary, and proportionate” — Meta could face sanctions under the Online Safety Bill. But she reiterated her “clear preference” is to work “constructively with them”.

“In the first instance, we believe the technology exists. The Internet Watch Foundation agrees. The NSPCC agrees,” she went on, making another reference to undefined “safety measures” she wants Meta to apply. “Tech leaders, tech industry organisations have developed the technology — it’s now on Meta to do the right thing, to work with us in the interest of child safety to prevent their social media platforms from being safe havens for paedophiles. And to roll out this technology that will safeguard children but also protect user privacy.”

While the Home Secretary did not specify what “safety measures” the government is referring to, new Home Office guidance on E2EE suggests ministers want Meta to implement hash matching technologies for detecting CSAM similar to those it has been using for years on its non-E2EE services.

Applying content scanning technologies to strongly encrypted content, where only the sender and recipient hold the encryption keys, is a whole different kettle of fish, to put it politely. Security and privacy experts are therefore concerned the government’s push for “safety tech” will lead, via powers contained in the Online Safety Bill, to Ofcom mandating that E2EE platforms bake client-side scanning technology into their systems — a move scores of experts have warned will risk the security and privacy of millions of web users.
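To make concrete what is being argued over, here is a minimal, hypothetical sketch of client-side scanning; it is not Meta’s, the Home Office’s, or any vendor’s actual implementation. The client checks outgoing media against a blocklist of hashes of known CSAM before the message is encrypted. Production systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-compression, whereas the exact SHA-256 match below would only catch bit-identical copies.

```python
import hashlib
from typing import Callable

# Hypothetical blocklist of digests of known illegal images, as might be
# distributed to devices by a child-safety body (real lists are not public).
KNOWN_BAD_HASHES: set[str] = set()

def client_side_scan(attachment: bytes) -> bool:
    """Return True if the attachment matches the blocklist."""
    return hashlib.sha256(attachment).hexdigest() in KNOWN_BAD_HASHES

def send_attachment(attachment: bytes,
                    encrypt: Callable[[bytes], bytes],
                    transmit: Callable[[bytes], None]) -> None:
    # The contested step: the client inspects the plaintext before E2EE is
    # applied, and could just as easily be made to report matches upstream.
    if client_side_scan(attachment):
        raise PermissionError("attachment blocked by client-side scan")
    transmit(encrypt(attachment))
```

The repurposing worry experts keep raising falls straight out of this structure: nothing technical restricts the blocklist to CSAM, so whatever hash set the operator (or a government leaning on the operator) ships would be matched, and potentially reported, in exactly the same way.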

The Home Office document does not spell out how to square this circle but it points to a “Safety Tech” challenge the department ran back in 2021 — when it tossed public money toward the development of “proof-of-concept” CSAM detection technologies which, it suggested, could be applied to E2EE “whilst upholding user privacy” — with the guidance claiming: “The fund demonstrated that it would be technically feasible.”

A spokesperson for the Department for Science, Innovation and Technology (DSIT), which has been steering the Online Safety Bill, also told us:

Our Safety Tech Challenge fund has shown this technology can exist, which is why we’re calling on social media companies to use their vast capacity for innovation to build on these concepts and find solutions that work for their platforms — so children are kept safe while maintaining user privacy.

Yesterday our landmark Online Safety Bill was passed through Parliament, meaning as a last resort, on a case by case basis and only when stringent privacy safeguards have been met, Ofcom can direct companies to either use, or make best efforts to develop or source, technology to identify and remove illegal child sexual abuse content.

We contacted the Home Office to ask which safety measures Braverman is referring to — and whether the government is directing Meta to apply client-side scanning to Messenger and Instagram. A Home Office spokeswoman did not provide a straight answer but pointed back to this Safety Tech challenge — reiterating the Home Office’s claim the fund demonstrated scanning in a privacy-safe manner would be “technically feasible”.

The problem for Braverman and the government is that security and privacy experts dispute that claim.

Awais Rashid, professor of cyber security at the University of Bristol and director of the Rephrain Centre — which was appointed to independently evaluate the projects that participated in the Home Office’s Safety Tech Challenge — warned in July that none of the technology is fit for purpose, writing then: “The issue is that the technology being discussed is not fit as a solution. Our evaluation shows that the solutions under consideration will compromise privacy at large and have no built-in safeguards to stop repurposing of such technologies for monitoring any personal communications.”

Reached for a response to Braverman’s latest comments, including her claim that technology already exists which can scan messages for illegal content without harming user privacy, Rashid reiterated his warning that this is simply not possible.

“Our independent evaluation of the prototype tools in the Safety Tech Challenge Fund, which include client side scanning mechanisms, concluded that such tools would lead to fundamental breaches of users’ privacy and human rights,” he told TechCrunch. “As researchers we not only work on protecting privacy but also on safeguarding vulnerable users online including protecting children from sex offenders. However, weakening the confidentiality of communications by scanning messages before encryption would weaken privacy for everyone including young people whom the proposed approach aims to protect.”

“There are various means by which any unscrupulous actor can exploit such technologies to monitor communications beyond the intended purpose. Furthermore, historical and recent events, for example the Met police and NI [Northern Ireland] police data breaches, have shown that, even with good security mechanisms in place, large-scale data leaks are possible,” he also told us, adding: “We mustn’t build any mechanisms that allow unfettered access to personal communications on a societal scale. We must follow the independent scientific evidence in this regard provided by the Rephrain centre and expert consensus nationally and internationally as otherwise the UK will not be the safest place to live and do business as set out in the National Cyber Strategy.”

We put Rashid’s remarks and Rephrain’s assessment of the Safety Tech projects to the Home Office for a response but at the time of writing it had not got back to us.

Many more privacy and security experts agree the government’s current approach is flawed. An open letter we reported on in July — warning that deploying surveillance technologies would only undermine online safety — was signed by nearly 70 academics.

One of its signatories, Eerke Boiten, a professor in cyber security and head of the School of Computer Science and Informatics at De Montfort University, has previously described the Home Office challenge as “intellectually dishonest” — essentially dubbing the whole effort an exercise in government-funded snake oil.

“The essence of end-to-end encryption is that nothing can be known about encrypted information by anyone other than the sender and receiver. Not whether the last bit is a 0, not whether the message is CSAM. The final Rephrain report indeed states there is ‘no published research on computational tools that can prevent CSAM in E2EE’,” he wrote back in March, adding: “Maybe a more honest formulation would have been to look for technologies that can keep users safe from specific kinds of abuse in services where the providers are not including complete surveillance of all service users’ activities.

“This would also remove another intellectual dishonesty in the challenge, namely the suggestion that any techniques developed would apply specifically and only to CSAM, rather than being (ab/re)usable for identifying and restricting other, potentially less obviously undesirable, content — reminders of this are a refrain in the Rephrain evaluations. It would also have eliminated a number of the projects before spending £85K of public money each on full surveillance solutions.”
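Boiten’s point about “the last bit” is the property cryptographers call semantic security, and a toy example makes it tangible. The sketch below (ours, not his) uses the widely deployed ChaCha20-Poly1305 cipher via Python’s cryptography package: without the key, which in an E2EE system only the endpoints hold, ciphertext is statistically indistinguishable from random bytes, so no server-side scanner can classify what it carries. That is precisely why the proposals keep landing on scanning at the client, before encryption happens.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

# In an E2EE system this key exists only on the sender's and recipient's devices.
key = ChaCha20Poly1305.generate_key()
aead = ChaCha20Poly1305(key)

nonce = os.urandom(12)  # 96-bit nonce, unique per message
ciphertext = aead.encrypt(nonce, b"an entirely innocent message", None)

# To anyone without the key (the platform, Ofcom, GCHQ) the ciphertext looks
# like random bytes; only its length leaks. The intended recipient, holding
# the key, recovers the plaintext exactly:
assert aead.decrypt(nonce, ciphertext, None) == b"an entirely innocent message"
```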

Asked whether any technology (now) exists that would allow law enforcement to access E2EE content while simultaneously protecting user privacy, Boiten told us: “In my opinion, such technology does not exist. The scientific evaluation of previous Home Secretary Priti Patel’s research competition to explore candidates for such technology (the Safety Tech Challenge) concluded that all submissions had significant problems with protecting privacy, with preventing abuse of such tools, and with transparency, disputability, and accountability.”

If the government intends to force Meta to implement some form of client-side scanning, the fact that its own Safety Tech Challenge — which Boiten notes involved five candidates all pushing “instances” of the tech (“one more place where nobody had any better ideas apparently”) — was so poorly rated by independent experts hardly bodes well.

The expert consensus is clear that baking in technology which blanket-scans people’s messages does the opposite of protecting users or their privacy. (“After years of leaving it on the shelf, Apple have also just abandoned the idea because they realise they cannot get it to work,” as Boiten also pointed out.)

Oh hi GCHQ!

The Home Office guidance on E2EE and child safety does also cite an academic paper — which is described as being written by “the UK’s leading cryptographers” but is actually authored by U.K. intelligence agency GCHQ’s Crispin Robinson and Dr Ian Levy, technical director of the U.K. National Cyber Security Centre (another arm of GCHQ). A government spokeswoman claimed this paper outlines “a variety of techniques that could be used as part of any potential solution to end-to-end encryption — so both protecting privacy and security whilst also enabling law enforcement action”.

Thing is, Braverman’s remarks today appear to go further — asserting that technology already exists to enable law enforcement access while safeguarding user privacy. Yet in their paper the pair of GCHQ staffers conclude only that it may be possible to configure client-side scanning in a way that mitigates some privacy concerns. Which also implies a rather substantial moving of the goalposts vs the Home Office’s loud trumpeting of ready-to-roll CSAM-scanning tech that completely protects user privacy.

“We have not identified any techniques that are likely to provide as accurate detection of child sexual abuse material as scanning of content, and whilst the privacy considerations that this type of technology raises must not be disregarded, we have presented arguments that suggest that it should be possible to deploy in configurations that mitigate many of the more serious privacy concerns,” the GCHQ staffers argue rather tortuously in the conclusion of their paper.

(For the record, Levy and Robinson also state up front that their work is “not a high level design document”; “not a full security analysis of any particular solution”; and “not a set of requirements that the UK Government wishes to be imposed on commodity services”. “This paper is not an exposition of UK Government policy, nor are any implications that can be read in this document intended to relate to UK Government future policy,” the two civil servants further caveat their work.)

Discussing Braverman’s demand for no end-to-end encryption rollouts “without safety measures”, Ross Anderson, a professor of security engineering at the Department of Computer Science and Technology, University of Cambridge — and a veteran of decades of crypto wars — was scathing.

“The government was reassuring people only a few days ago that there was no such technology, so we should relax as they could not enforce the new law until it exists. That was the line used to get the [Online Safety] bill through Parliament. Looks like GCHQ has made a stunning scientific advance this week! We look forward to seeing the details,” he said via email, before going on to dismiss the paper by Levy and Robinson as something he’s already rebutted in his own paper.

“[S]urveillance… has not helped in the past and there is no real prospect that the measures now proposed would help in the future,” he also blogged on the topic recently. “I go through the relevant evidence in my paper and conclude that ‘chatcontrol’ will not improve child protection but damage it instead. It will also undermine human rights at a time when we need to face down authoritarians not just technologically and militarily, but morally as well. What’s the point of this struggle, if not to defend democracy, the rule of law, and human rights?”

Even the NSPCC did not have a straight answer when we asked which “safety” technologies it’s advocating for bolting onto E2EE platforms. But a spokeswoman for the child protection charity duly pointed to the GCHQ paper — claiming “GCHQ and others have made clear that technical solutions are possible” — without articulating exactly which technologies they mean.

She did also name-check SafeToNet, a U.K. safety tech startup that makes money by selling parental control-style features and child-location tracking for embedding into third party apps, which she claimed has “developed technology that can identify known and new child abuse material before being sent”.

This is presumably a reference to SafeToNet’s SafeToWatch, a “predictive analysis” technology for detecting CSAM in real-time on the user’s device, per the company’s description — i.e. if it were to be forcibly embedded into E2EE messaging platforms as part of a client-side scanning implementation for circumventing strong encryption. (“If WhatsApp is able to scan files for viruses and links for suspicious content without breaking encryption, why is it that scanning for CSAM in the same manner breaks encryption?” SafeToNet opined in a blog post earlier this year, in response to a Wired article reporting on WhatsApp’s concerns about the Online Safety Bill.)
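Hash matching of the kind sketched earlier only flags already-known material; “predictive analysis” implies an on-device classifier scoring novel media instead. A minimal, hypothetical sketch (our illustration, not SafeToNet’s actual product) shows the hook is the same chokepoint, plaintext inspected before encryption, with the added wrinkle that any classifier has a false-positive rate:

```python
from typing import Callable

# Illustrative threshold: tuning it trades missed abuse material against
# falsely flagging innocent family photos, a core objection raised in the
# Rephrain evaluations of classifier-based approaches.
BLOCK_THRESHOLD = 0.9

def predictive_scan(media: bytes, model: Callable[[bytes], float]) -> bool:
    """Run a hypothetical on-device model over plaintext media; True = block."""
    return model(media) >= BLOCK_THRESHOLD

def send_media(media: bytes,
               model: Callable[[bytes], float],
               encrypt: Callable[[bytes], bytes],
               transmit: Callable[[bytes], None]) -> None:
    if predictive_scan(media, model):
        # Whether a hit is merely blocked locally or reported to a third
        # party is the line Meta draws: exposure of message content without
        # the sender's consent and control.
        raise PermissionError("blocked by on-device predictive scan")
    transmit(encrypt(media))
```

That distinction matters for SafeToNet’s WhatsApp analogy: scanning links or files locally and acting only on the user’s own device is one thing; a mandated scan whose results flow to the platform or law enforcement is what critics argue breaks the E2EE promise.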

“Ultimately if companies are not happy with the technology that is being developed it is for them to invest in finding solutions which they may have to do in the future under the provisions of the Online Safety Bill,” the NSPCC’s spokeswoman added, before further claiming: “But it is not just about scanning. It is about understanding and mitigating the risks of platforms and how they could be heightened with end-to-end encryption.”

The Home Office’s E2EE guidance document is also thick with calls for Meta and other social media firms to nerd harder and come up with novel tech solutions to child safety concerns.

“Currently, Facebook and Instagram account for over 85% of the global referrals of child sexual abuse instances from tech companies,” the Home Office writes. “The implementation of E2EE will significantly reduce the number of monthly referrals of suspected child sex offenders to UK law enforcement. We are urging Meta and other social media companies to work with us and use their vast engineering and technical resources to develop a solution that protects child safety online and suits their platform design best.”

Meta appears to have anticipated the Home Office’s latest line of attack: today it published an updated report, an overview of its approach to “Safer Private Messaging on Messenger and Instagram Direct Messages”, which repeats its rejection of scanning the content of users’ E2EE messages as a proportionate (or even rational) approach to online safety concerns.

“Meta believes that any form of client-side scanning that exposes information about the content of a message without the consent and control of the sender or intended recipients is fundamentally incompatible with user expectations of an E2EE messaging service,” the company writes in the report. “People that use E2EE messaging services rely on a basic promise: That only the sender and intended recipients of a message can know or infer the contents of that message.”

“We strongly believe that E2EE is critical to protecting people’s security. Breaking the promise of E2EE — whether through backdoors or scanning of messages without the user’s consent and control — directly impacts user safety,” it also argues. “The values of safety, privacy, and security are mutually reinforcing; we are committed to delivering on all of them as we move to E2EE as standard for Messenger and Instagram DMs.

“Our goal is to have the safest encrypted messaging service within the industry, and we are committed to our continued engagement with law enforcement and online safety, digital security, and human rights experts to keep people safe. Based on work to date, we are confident we will deliver that and exceed what other comparable encrypted messaging services do.”

Reached for a response to Braverman’s remarks today, a Meta spokesperson also told us:

The overwhelming majority of Brits already rely on apps that use encryption to keep them safe from hackers, fraudsters and criminals. We don’t think people want us reading their private messages so have spent the last five years developing robust safety measures to prevent, detect and combat abuse while maintaining online security. We’re today publishing an update report setting out these measures, such as restricting people over 19 from messaging teens who don’t follow them and using technology to identify and take action against malicious behaviour. As we roll out end-to-end encryption, we expect to continue providing more reports to law enforcement than our peers due to our industry leading work on keeping people safe.

So — for now at least — Meta appears to be holding the line against client-side scanning on E2EE services.

But there’s no doubt the pressure is on with legal liability incoming under the new U.K. law and politicians brandishing the new powers for Ofcom to issue fines that could run into the billions of dollars.

Neverending crypto wars?

A couple of scenarios seem like they could follow at this point: One in which tech firms like Meta are forced, kicking and screaming, via threats of huge financial penalties, towards client-side scanning. Although such strongly stated and public opposition makes that hard to imagine. (As well as Meta, other tech firms that have spoken out against applying surveillance technologies to E2EE include Signal, Apple and Element.)

Indeed, tech firms might be rather more willing to push forward with their own threats to pull services from the U.K., given wider reputational considerations (the U.K. is just one market they operate in, after all) — plus what looks like relatively high leverage in light of the political (and economic) damage the country would suffer if a mainstream service like WhatsApp shut down for local users.

Another — perhaps more plausible — scenario is that shrill U.K. government demands on E2EE platforms for undefined “safety measures” end up morphing into something softer and (dare we say it) more like another fudge: Say a package of checks and feature tweaks which don’t involve any client-side scanning and are, therefore, acceptable to platforms. So no blanket surveillance of users, but a package of measures platforms can layer on to claim compliance (and even brand as “safety tech”) — such as, say, age verification; limits on how certain features work when the user is a minor; beefed-up reporting tools and resourcing; and proactive steps to educate users on how to stay safe — all with enough fuzziness in the original government demands for politicians to claim, down the line, that they tamed the tech giants.

Although age verification may also represent a red line for some: Wikipedia, for one, has expressed concerns over the Online Safety Bill becoming a vehicle for state censorship if Ofcom ends up mandating that certain types of information are locked behind age gates.

Whatever happens, one thing looks clear: The crypto wars will roll on, in some new shape or form, because there are larger forces at play.

Fleshing out his perspective on this in a phone call with TechCrunch, Anderson argues the government is using child safety as a populist excuse to push for a surveillance infrastructure to be embedded into strongly encrypted platforms — in order to enable the kind of blanket access that would be of high interest to intelligence agencies like GCHQ.

“The fact is that everybody uses WhatsApp nowadays. For all sorts of purposes of extreme interest to signals intelligence agencies,” he told us. “None of these guys give a shit about kids except as an excuse… In my paper, ‘Chat Control or Child Protection?’, I pointed out the sort of things that you will do if you actually care about kids. None of them are anything to do with collecting more dirty pictures. None of them.

“Because if you’ve got some weird drunken guy who’s raping his 13 year old stepdaughter in Newcastle, the people who have to deal with that are the police in Newcastle and maybe the school, and maybe the church, and maybe the scouts or whatever. It’s nothing to do with GCHQ. They don’t care. The director of GCHQ will not lose her job as a result of that child being abused.”

Similar arguments about the spread of child pornography were used to push for backdooring encryption in the 1990s, per Anderson. Then, after 9/11, terrorism became the go-to ghoul spy agencies invoked to the same end.

Child safety is just the latest pendulum swing of the same old “playbook”, in his view.

He also points out the U.K. government already has powers, under the 2016 Investigatory Powers Act, to order E2EE platforms to remove encryption in order to act on specific threats to national security. But targeted (and time-limited) access under emergency procedures and protocols is different to baking in blanket surveillance infrastructure which spooks can dip into via the security vulnerabilities that would be introduced into E2EE as a result, including to more easily grab comms that flow across international borders.

“You cannot get stuff particularly across borders on the basis of an emergency procedure when the emergency no longer takes place,” noted Anderson, adding that Mutual Legal Assistance Treaty asks take time and timely exchanges of information via that established legal route require more competence than government and law enforcement have typically demonstrated. “The things that go wrong in this space are because the Home Office and the police tried to do things at which they are useless,” he argued.

So what’s next? The next round of this latest crypto battle will focus on Ofcom’s consultations on the standards it will be enforcing through the Online Safety Bill. Anderson predicts a fresh round of academic conferences and activity will be spun up to respond to whatever new outrages emerge via that legislative coloring-in. “This is going to be history repeating itself as farce,” he warned.

Anderson also has his eye on the European Union, where lawmakers are pushing a similar proposal to drive platforms towards CSAM-scanning — although legal protections for privacy, comms and personal data are stronger there, so any move to foist client-side scanning on messaging apps would likely be rolled back as disproportionate by European courts. But not having unworkable and unlawful legislation in the first place would, obviously, be the better outcome. And so the fight continues.

“The real game is in Europe,” he added. “And we believe that we have probably got a blocking minority in the European Council.”

This report was updated to include comment from DSIT.


