In early January, Monika Bickert, Facebook’s vice president of global policy management, announced that deep fakes would join nudity, hate speech, and graphic violence on the list of Facebook’s categories of banned content.

Considering that social media users share millions of photos and videos daily, policing such content will be no easy feat.

To be banned, Facebook says a video has to meet two criteria: it must be manipulated “in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say,” and it must be “the product of artificial intelligence or machine learning.”
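Expressed as code, the policy requires both conditions to hold at once. The sketch below is purely illustrative — the field names and structure are assumptions, not Facebook’s actual moderation system:

```python
# Hypothetical sketch of Facebook's two-part ban criteria.
# Field names are illustrative, not Facebook's implementation.
from dataclasses import dataclass

@dataclass
class VideoAssessment:
    # Edited in ways not apparent to an average person, likely to mislead
    # viewers into thinking the subject said words they did not say.
    misleadingly_manipulated: bool
    # The product of artificial intelligence or machine learning.
    ai_generated: bool

def violates_deepfake_policy(video: VideoAssessment) -> bool:
    # Both criteria must be met for the video to be banned.
    return video.misleadingly_manipulated and video.ai_generated
```

Note that the conjunction matters: a crudely edited clip that fools no one, or a misleading edit made with conventional software rather than AI, would fall outside the ban as stated.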

Deep fake technology uses deep neural networks to convincingly replace, for example, one face with another in a video or image.

It has been around for quite some time and was originally used in a rudimentary way to alter pornography.

Dealing with disinformation is a key role for social media platforms.

They need to curate and police their patch of public space (from which they make handsome profits) and want users constantly on their platforms.

Just as the term ‘fake news’ has been popularised, so too will ‘deep fakes’ be.

Of particular concern is not just the doctoring of images and video, but also that questioning the veracity of legitimate images and video is becoming a simple throwaway line.

In everything from media reporting to courtroom hearings, there is a fear that real evidence can be plausibly dismissed as fake.

Image databases, AI-based detection algorithms, watermarks and smartphone apps which verify photos and videos have all been suggested as part of the solution.
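One way such verification apps could work is to attach an authentication tag to a file at capture time and check it later. The sketch below uses an HMAC over the file bytes purely for illustration — a production scheme would use public-key signatures with hardware-backed keys, not a shared secret:

```python
# Illustrative sketch of capture-time media authentication.
# A real system would use public-key signatures and secure hardware,
# not a shared secret key as assumed here.
import hashlib
import hmac

SECRET_KEY = b"device-secret"  # hypothetical device-held key

def sign_media(data: bytes) -> str:
    """Compute an authentication tag over the media bytes at capture time."""
    return hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()

def verify_media(data: bytes, tag: str) -> bool:
    """Later, a holder of the key can check the file is unaltered."""
    expected = hmac.new(SECRET_KEY, data, hashlib.sha256).hexdigest()
    # Constant-time comparison avoids leaking information via timing.
    return hmac.compare_digest(expected, tag)
```

An unaltered file verifies against its tag; changing even one byte of the media breaks verification, which is what makes tamper-evidence possible.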

But whilst technology will play an important role, there is no substitute for human oversight to moderate and verify content.

Consumer education is also key.

Whilst we have a multitude of online safety and security programs for end users (with statistics telling us most don’t work), we need to ensure internet users understand that people and organisations will falsify content as a commercial, political or social tactic, with the aim of deliberately causing confusion.

We have to prove to internet users that a real thing is real, and then ensure they believe it.

This is a two-step process.

There is a fine balance between educating the public to question more and encouraging them to question everything.