As the upcoming federal election campaign moves online, digital platforms and social media companies are gearing up to counter the spread of misleading information.
While politicians should campaign responsibly, digital platforms also have a responsibility to take vigilant, careful action to identify and address misinformation and disinformation during the Australian election campaign, according to Dr Anne Kruger, APAC director of First Draft, a news verification organisation affiliated with the University of Technology Sydney’s Centre for Media Transition.
There is a distinction to be made between misinformation, the unknowing sharing of incorrect or unsubstantiated information, and disinformation, the knowing or malicious distribution of falsehoods designed to mislead or manipulate. Both, however, pose challenges to fair elections.
The major platforms have released details about ad transparency and support for fact-checking resources; however, Kruger said a multi-pronged approach is needed.
“There is more work to be done to educate audiences about how to find reliable information and practise critical thinking, as well as a re-think about the design and delivery of that reliable information so that it reaches audiences where they are,” she told Information Age.
“It’s important the platforms continue to refine and improve their tools and levers, as an election is an easy target and opportunity for the creation of misleading information, which can take many forms — memes, articles, audio messages to videos, screenshots and even comments on social media posts,” she said.
Is it digital whack-a-mole?
First Draft has collaborated with the Digital Industry Group Inc (DIGI) to develop a code of practice to reduce the spread of disinformation and misinformation, including during elections, with Adobe, Apple, Facebook, Google, Microsoft, Redbubble, TikTok and Twitter signing on.
Actions they commit to take include labelling false content, demoting the ranking of content, prioritising credible sources, suspension or disabling of accounts and removal of content.
DIGI managing director Sunita Bose said the code has an opt-in commitment for platforms that offer political advertising.
“All of the social media signatories have opted in to enable their users to better identify the source of political advertising so that there is transparency,” she told Information Age.
While the code needs to balance freedom of expression with political communication and prevailing laws, it contains protections for authorised statements by political parties.
Even so, it has been strongly criticised for being voluntary and having no enforcement provisions or penalties.
Bose also noted many signatories are working with the Australian Electoral Commission (AEC) “to put in place arrangements to cooperate in the enforcement of their misinformation and disinformation policies, as well as electoral laws more broadly”.
For its part, the AEC has now launched a register that lists major pieces of disinformation it has discovered.
What are the digital platforms doing to counter misinformation?
The online disinformation and misinformation code includes a new independent oversight service that monitors digital platforms’ actions, checks their transparency reports and operates a complaints handling process.
Among the digital platforms, Twitter’s position is that content moderation is just as important as content removal, and the company has put more focus on labelling potentially misleading content.
It also recently launched a new beta feature to deal with misinformation, allowing users to flag content they believe to be misleading.
YouTube is deploying machine learning and human moderators to remove violative content, push up authoritative sources and reduce the spread of problematic content.
Google has said advertisers of paid election ads must be verified, and it has launched a political advertising transparency report in Australia, a cumulative record tallying the number and value of political ads since November 2020, although it doesn’t isolate political advertising for the life of this year’s campaign specifically.
Also working with First Draft, Facebook parent company Meta is trying to combat misinformation and election interference across both Instagram and Facebook.
It’s expanding the fact-checking program in Australia to include RMIT FactLab, joining Agence France-Presse and Australian Associated Press in reviewing and rating content. In regard to political advertising, advertisers must be authorised using government-issued photo ID and place a “Paid for by” disclaimer on their ads.
Even social sharing platform Pinterest has measures in place, not accepting campaign ads and not monetising election-related content. It also limits election-related content in places like the home feed, related Pins, notifications or “more ideas” within a board.
Yet First Draft has already picked up attempts to introduce false narratives into the Australian election, with one former senator tweeting that the AEC is proposing to acquire and use the same Dominion Voting Systems machines to count votes as were used in the last US election.
“We have seen the same sentiments and questions about voting machines in the comments on a YouTube recording by a minor party. The recording is still available and viewers were urged to help increase follower numbers exponentially,” said Kruger.
Is political advertising flouting digital platform ad library rules?
The issue of political scare campaigns is another area of focus for First Draft because they may contravene platform advertising rules.
While politicians regularly use negative campaigns and “attack ads” to target opposing candidates and policies, First Draft said that in 2019 debunked information was used as part of an online political scare campaign about an inheritance tax.
To examine where politicians may be skirting the platforms’ ad library rules, a tactic that can fly under the radar of misinformation monitoring yet have similar effects, First Draft is now actively monitoring the platforms’ ad libraries and using content analysis to assess the impact.
“These ads can target demographics closely – by location, age, gender – and users need to understand, and question why they are seeing ads, rather than just simply keep scrolling and letting the information sink in unconsciously, as famously happened overseas when Facebook ads targeted vulnerable Brexit swing voters,” Kruger said.