Digital platforms should be forced to detail their algorithms and the ways they have sought to minimise online abuse, a government review has recommended as part of a package of reforms designed to fight online harms perpetrated through popular networks.
Under the proposed new regulations – contained within the final report of the Select Committee on Social Media and Online Safety’s Inquiry into Social Media and Online Safety – the eSafety Commissioner would explore ways to improve transparency of the operation of social media algorithms and services.
This includes a deep technical analysis, potential regulation of end-to-end encryption “in the context of harm prevention”, documentation of harm-reduction activities, and examination of the ways social media firms “actively apply different standards to victims of abuse” depending on whether victims are public figures or their work requires them to have a social media presence.
The 26 recommendations – which also include educational, research and enforcement activities for students, schools, family support services, law-enforcement officials and others – are geared towards creation of a statutory duty of care that would require digital platforms to proactively help their users fight online abuse.
Social media companies would be required to apply maximum privacy and safety settings for all users under 18 years of age, while the committee also recommended optional parental controls be mandated “for all technology manufacturers and providers”.
An online complaints service would direct victims of online abuse to a service appropriate for their status as children, parents, women, people from culturally and linguistically diverse backgrounds, and other “relevant vulnerable groups”.
The recommendations “are an important next step in making our online world and social media platforms safer for all,” committee chair Lucy Wicks said in releasing the report – which reflects over three months of activity including 11 public hearings, nearly 60 witnesses, and over 100 submissions.
“The government must continue to hold social media companies to account and support victims of abuse,” Wicks continued, warning that “for too long social media platforms have been able to ‘set the rules’, enabling proliferation of online abuse.”
“The balance of responsibility for the safety of users online, which until recently has been primarily on users, must be flipped to ensure that social media platforms bear more of the burden of providing safety for their users.”
Shouldering the burden – or dropping the ball?
The range of submissions to the inquiry highlighted the many ways that social media abuse is often left to fester unchecked by platforms like Twitter, Facebook, and TikTok.
Ready access to “visual and comparative” pro-eating disorder sites, for example, constitutes “validating and triggering behaviour that is known to intensify the deleterious impact of eating disorders,” advocacy group Eating Disorders Families Australia noted in a submission warning that digital platforms’ facilitation of access to such content is “inherently problematic for young people battling with eating disorders.”
Similar concerns were echoed by support group Butterfly Foundation, online bullying advocacy group Dolly’s Dream – founded after a 14-year-old cyberbullying victim took her own life – and advocates concerned about social media’s harmful impact in areas such as obesity, gender equity, and child abuse.
“There are little or no protections in place to prevent Australia’s children from predatory marketing practices in the digital world,” the Victorian Health Promotion Foundation (VicHealth) warned. “Children cannot compete with the pernicious and ever-evolving tactics used by harmful industries.”
Fully 79 per cent of Australians believe digital platforms should be responsible for content on their sites, the Australian Communications Consumer Action Network (ACCAN) said in citing recent research that also found a similar percentage agreeing that “more needs to be done to protect people’s safety and privacy online”.
Some 47 per cent of respondents to the ACCAN survey said they don’t trust digital platforms to act in their best interests, while just 27 per cent believe the government is doing enough to “make sure digital platforms do the right thing”.
Online platforms Google, Meta, Twitter, and TikTok called out a range of human and technological measures they’re using to identify and manage harmful content.
Twitter, for one, has been testing a new ‘downvote’ feature designed to help identify harmful or unhelpful content, while TikTok argued that a “mixture of technology and human interventions” had helped it remove over 81.5 million videos during the June 2021 quarter alone.
Meta – whose Facebook and Instagram platforms are regularly called out as facilitators of online abuse – recently partnered with RMIT FactLab to fight misinformation; updated its bullying and harassment policies; has worked to remove networks of users coordinating to cause “social harm”; and promoted the efforts of its 35,000-strong safety and security teams to “combat hate speech and harmful content”.
Yet while some analysts noted growing momentum for a duty of care by digital giants, civil-liberties group The Pirate Party urged caution – warning that the eSafety Commissioner’s efforts are “an application of moral panic to online spaces” and calling for “better safeguards against abuse, and [assurances] that there is no undermining of encryption.”