Facebook has revealed it disabled 583 million fake accounts in the first three months of this year alone.

In the wake of that disclosure, measures of Facebook’s true audience – and, by extension, its value to advertisers – have become somewhat murkier.

The figure, published in the company’s inaugural Community Standards Enforcement Report, offered the first glimpse of the scale of fraud Facebook deals with on a daily basis.

Fake accounts represent approximately 3 to 4 percent of monthly users, the company said, noting that most of the fraudulent accounts were removed “within minutes of registration”.

Facebook’s charm offensive

Facebook has been working overtime to position itself as a protector of its online users in the wake of the Cambridge Analytica scandal, which saw tens of millions of users’ details siphoned off the social media platform without their knowledge.

That incident caused the collapse of the analytics firm, pushed Facebook CEO Mark Zuckerberg to testify before the US Congress, and prompted an investigation by the Office of the Australian Information Commissioner (OAIC) after revelations that more than 300,000 Australians had been caught up in the scandal.

By publishing its long-secret community standards and reporting on its proactive efforts to stop fraud, the company is clearly trying to substantiate its claim that it wants to fall on the right side of the privacy debate.

Yet Facebook may also be sending mixed messages to many of its users: in April, for example, it changed its terms of service for 1.5 billion non-European users to minimise its exposure to the European Union’s General Data Protection Regulation (GDPR), which comes into effect on May 25.

Even as users wrestle with their ambivalence about Facebook’s attempts to balance privacy against the undeniable value of their personal data, revelations about the high fraud rates are likely to cause consternation amongst advertisers, who pay handsomely to target their messages at a user community that is – by Facebook’s admission – as much as 26.5% fictitious.
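That 26.5% figure is not quoted directly in Facebook’s report; it appears to follow from setting the 583 million removed accounts against the roughly 2.2 billion monthly active users Facebook reported for the same period. The sketch below reconstructs that assumed derivation in Python, with both inputs treated as approximations rather than official ratios.

```python
# Back-of-the-envelope reconstruction of the 26.5% figure.
# Both inputs are approximations: the removed-account count comes from
# Facebook's enforcement report; the MAU figure is an assumed round number.
removed_accounts = 583_000_000        # fake accounts disabled in Q1
monthly_active_users = 2_200_000_000  # assumed monthly active users for the quarter

fictitious_share = removed_accounts / monthly_active_users
print(f"Implied fictitious share: {fictitious_share:.1%}")  # ~26.5%
```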

That could change the calculations of advertisers that use cost per thousand impressions (CPM) as a core metric to judge the cost-effectiveness of their advertising spend.

Facebook advertising specialist AdEspresso, for one, warns that CPM is “the most unpredictable bidding method and can lead to a significant spend without any results.”

While Facebook tracks CPM, cost per 1,000 people reached, and many other metrics for its advertising clients through their advertising dashboards, there is little indication of how a changing understanding of Facebook’s true audience size will affect bid-based advertising prices.
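For advertisers, the practical question is how a smaller pool of genuine users changes the effective price of reaching them. The sketch below illustrates that adjustment; the spend, impression count and fake-account share are illustrative assumptions, not Facebook figures.

```python
def effective_cpm(spend: float, impressions: int, fake_share: float) -> float:
    """Cost per 1,000 impressions, discounted by an assumed share of fake accounts."""
    genuine_impressions = impressions * (1 - fake_share)
    return spend / genuine_impressions * 1000

# Illustrative numbers only: a $5,000 campaign delivering 1 million impressions.
nominal = effective_cpm(5_000, 1_000_000, fake_share=0.0)     # $5.00 CPM as reported
adjusted = effective_cpm(5_000, 1_000_000, fake_share=0.265)  # ~$6.80 per 1,000 genuine impressions
print(f"Nominal CPM: ${nominal:.2f}  Adjusted CPM: ${adjusted:.2f}")
```

On that arithmetic, the cost of reaching genuine users would be roughly a third higher than the nominal CPM suggests.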

Machine learning

The revelations also shed more light on the company’s use of artificial intelligence (AI) and machine learning (ML), upon which it is leaning heavily in an effort to keep up with the flood of fraudulent, offensive, and illegal activity on its platform.

AI-based tools helped the company find and flag nearly all of the 837 million pieces of spam spotted during the quarter, as well as identify 21 million pieces of “adult nudity and sexual activity” and 3.5 million “pieces of violent content”.

They are not, by Facebook’s own admission, as effective at identifying hate speech, which can be harder to spot because of its linguistic ambiguity.

AI as arbiter of social norms

The revelations also raise the spectre of social media giants interfering in the everyday activities of their platforms, with no accounting of false positives that may have seen legitimate pages and users removed from the platform.

Facebook isn’t the only social-media giant heading down this path.

Twitter, for its part, last year formed a partnership with IBM to use its Watson AI engine – which IBM has actively spruiked to private enterprise and government in Australia and abroad – to help identify and purge problematic tweets.

That effort led to the January notification of more than 1 million Twitter users that they had been exposed to 3,814 Russia-linked Twitter accounts, which had been unceremoniously deleted.

This month, however, Twitter created another storm by revealing that it was actively muting tweets that were judged – by AI – to represent “behaviours that distort and detract from the public conversation”.

Twitter claims the new “healthy conversation” approach has reduced abuse reports from searches by 4 percent, and abuse reports from conversations by 8 percent. The company also says that less than 1 percent of accounts “make up the majority of accounts reported for abuse”.

That’s well below the proportion outed by Facebook, but abusive accounts and fake accounts aren’t necessarily the same thing.

Both have an impact on the quality of the experience for social media users – and increasing efforts to rein in low-quality usage are likely to continue shaping the way legitimate users engage with the platforms in the long term.