It may have originally been intended to protect citizens from online harm, but scope creep could see the UK’s new ‘duty of care’ legislation put legal heft behind ‘Five Eyes’ governments’ long-running efforts to access social media users’ encrypted communications.
New reports suggest that ministers from Five Eyes countries – Australia, New Zealand, UK, US, and Canada – will meet this week to discuss using the new UK legislation to cudgel recalcitrant social media giants into providing government agencies with access to private communications.
A UK government white paper last year outlined the proposed approach, emphasising the importance of a “culture of transparency, trust and accountability” and requiring social media firms to audit harmful content and report on the measures they are taking to address it.
Compliance with the enforcement regime would be overseen by an industry-funded independent regulator which, the government posited, would also “have powers to require additional information... including about the operation of algorithms.”
The word ‘encryption’ is conspicuously rare in the 98-page document, appearing just once, in relation to how online child grooming “moves across different platforms”.
A new twist on an old concept
Despite ongoing threats to clamp down on social media giants in the US, the UK legislation would lead the world in imposing a legally-enforceable duty of care on online platforms.
The common-law idea of a ‘duty of care’ has been applied to a wealth of different situations, and the interest from Five Eyes partners suggests they see the new laws as a way to force social media companies to decrypt messages between customers.
Pushback from Apple, Facebook, Google and other technology giants remains a thorn in the side of police and intelligence services, which have struggled to access encrypted devices and communications they say are protecting terrorists and paedophiles.
Facebook’s resistance – which stems in large part from the privacy manifesto that CEO Mark Zuckerberg penned last year after the company’s Cambridge Analytica privacy debacle – has been particularly problematic given that the company owns the dominant platforms Facebook, Instagram, and WhatsApp.
“I believe the future of communication will increasingly shift to private, encrypted services where people can be confident what they say to each other stays secure and their messages and content won't stick around forever,” Zuckerberg wrote amidst law enforcement bodies’ increasing calls for access to customer data.
Under ‘zero knowledge’ encryption frameworks, that data is protected by encryption that the social media firms themselves cannot break – but that hasn’t stopped government agencies from demanding access to it.
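The reason such demands are hard to satisfy is structural: in an end-to-end scheme, private keys live only on users’ devices, so the platform only ever handles ciphertext. The minimal Python sketch below (using the PyNaCl library, chosen purely for illustration and not reflecting any particular platform’s protocol) shows the idea.

```python
# A simplified sketch of end-to-end encryption between two users.
# Requires: pip install pynacl
from nacl.public import PrivateKey, Box

# Each user generates a key pair; the private key never leaves their device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts with her private key and Bob's public key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The platform relays only `ciphertext`; holding no private keys,
# it has nothing it could hand over in readable form.

# Bob decrypts with his private key and Alice's public key.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```

In this model the only ways to expose the plaintext are to weaken the scheme itself or to intercept messages on a device before encryption or after decryption – which is why the debate keeps returning to back doors.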
Where does liability start and end?
Australia’s push against live-streamed violence kick-started a global consensus around forcing social media firms to police their content, while its controversial Assistance and Access Act focuses squarely on letting the government order companies to help break encryption.
Home Affairs Minister Peter Dutton is also calling for unprecedented powers allowing the Australian Signals Directorate (ASD)’s surveillance arsenal to be turned on criminal suspects within Australia.
Yet rather than focusing on specific technologies, the UK’s idea of subjecting social media providers to a broader duty of care could shift the onus onto those companies to ensure that potentially harmful communications can be accessed and blocked.
“The design choices made by the companies in constructing these platforms are not neutral,” a trio of legal experts wrote in an analysis of the proposed laws, calling the approach “a sensible regulatory framework on which to proceed”.
“Every pixel a user sees on an online service is there as a result of decisions taken by the company that operates it,” they wrote. “Companies have to own responsibility for reasonably foreseeable matters that arise from operation of their service.”
If the use of their encrypted platforms for ‘online harms’ like terrorism and paedophilia can be reasonably foreseen, the logic goes, online platforms would be breaking the law if they don’t provide a mechanism to help stop such harms.
This novel legal structure would distance the UK government from the perception that it’s forcing companies to build technological back doors into their products.
“The problem with backdoors, as well-intentioned as they are, is that they are engineered vulnerabilities and bad actors inevitably find them,” Internet Society senior director of policy strategy and development Konstantinos Komaitis warned.
“This means for every crime you stop with a backdoor, you enable countless others... the criminal world effectively gets their hands on a digital skeleton key.”
Whether the new legislation represents more carrot or stick remains to be seen – but the fact that Google, Facebook and Twitter have all singled it out as a potential threat suggests the battle is only getting started.