A new programming AI system from DeepMind, Alphabet’s AI research arm, can solve complex coding problems about as well as the average human contestant in programming competitions, and could pave the way toward AI that can modify its own code.

AlphaCode was trained using open source code from GitHub and was put to the test solving abstract coding problems from the competitive programming platform Codeforces.

These contests test programming nous, typically requiring entrants to solve an esoteric problem from a natural-language description rather than a formal specification.

That is a difficult thing for an AI to do, and it differentiates AlphaCode from OpenAI’s Codex, which translates natural language requests into written code.

AlphaCode performed in the top 54 per cent of participants in coding competitions held on Codeforces late last year. That is quite a feat for a machine, which must first parse the problem’s complex natural-language description before it begins coding.

“Going from ‘what the problem is’ to ‘how to solve the problem’ is a great leap that requires understanding and reasoning about the problem, as well as a deep comprehension of a wide range of algorithms and data structures,” the authors of a recent paper about AlphaCode wrote.

“This leap is a significant difference from previous works, which tend to explicitly specify what to implement.”
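
To get a sense of the format, here is a minimal sketch of the kind of task involved. The toy problem below is invented for illustration and is far simpler than the problems AlphaCode faced: from a prose statement such as “read n integers and print the length of the longest strictly increasing run”, the competitor must produce a program that reads standard input and prints the answer.

    import sys

    def main():
        # Codeforces-style input: the first line is n, the second holds n integers
        data = sys.stdin.read().split()
        n = int(data[0])
        values = list(map(int, data[1:1 + n]))

        # Single pass, tracking the current and the best strictly increasing run
        best = current = 1 if n else 0
        for prev, cur in zip(values, values[1:]):
            current = current + 1 if cur > prev else 1
            best = max(best, current)

        print(best)

    if __name__ == "__main__":
        main()

Fed the input “5” and “1 2 1 2 3”, the script prints 3. Real contest problems keep this same read-solve-print shape but demand genuine algorithmic insight on top of it.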

AlphaCode was trained on 715GB of code in major languages including C++, C#, Go, Java, JavaScript, Python, Ruby, and PHP.

DeepMind’s researchers hinted at what AlphaCode could lead to down the track, including what the authors describe as “advanced AI risks” such as an AI system that is able to rewrite its own underlying code.

“Coding capabilities could lead to systems that can recursively write and improve themselves, rapidly leading to more and more advanced systems,” the researchers warn.

Yes, this coding bot could lead to the singularity – a hypothetical moment when an AI system rapidly improves itself in what philosopher Nick Bostrom has described as “an intelligence explosion”, producing a superintelligent, and not necessarily benign, AI.

Professor Scott Aaronson from the University of Texas’s computer science department described the paper as “absolutely astounding” but isn’t sold on the idea of AlphaCode leading to the singularity.

“Judged against where AI was 20-25 years ago, when I was a student, a dog is now holding meaningful conversations in English,” he said in a blog post.

“It’s not obvious how you go from solving programming contest problems to conquering the human race or whatever, but I feel pretty confident that we’ve now entered a world where ‘programming’ will look different.”

Other risks identified with AlphaCode include intellectual property concerns: the researchers note there is debate around whether it is “fair use” to train an AI on publicly available code such as that hosted on GitHub, even if the data is filtered for certain licenses.

“There still remains the decision of how to credit and use people’s code, even with permissive licenses,” they wrote.

Breakthroughs in automated coding could also lead to a glut of programmers, the researchers warn, although they say the effects of automation could be overstated since “writing code is only a portion of the job”.

They also note that previous instances of coding automation, such as the development of integrated development environments (IDEs), “have only moved programmers to higher levels of abstraction”.