The rapid development of artificial intelligence technology poses an “existential risk” that could lead to the deaths of “many, many people”, the former boss of Google has warned.
Eric Schmidt, who served as CEO of Google from 2001 to 2011 and as executive chair until 2015, sounded the alarm on AI last week at the Wall Street Journal’s CEO Council Summit, likening it to the development of nuclear technology and saying it could be “misused by evil people”.
“My concern with AI is actually existential, and existential risk is defined as many, many, many people harmed or killed,” Schmidt told the summit.
“And there are scenarios not today but reasonably soon, where these systems will be able to find zero-day exploits in cyber issues or discover new kinds of biology. This is fiction today, but the reasoning is likely to be true.
“And when that happens we want to be ready to know how to make sure these things are not misused by evil people.”
Schmidt expanded on his comparison between AI and nuclear technology.
“Nuclear had the property that there was a scarcity, which was enriched uranium,” he said.
“We are alive today because it was really hard to get that.”
Schmidt said regulation is important to ensure AI is not “misused by evil people”, but admitted he does not have a full solution to the problem, describing it as a matter for society as a whole.
The former Google boss did, however, push back against concerns that the rise of AI will lead to “large-scale job losses”.
“Here are the facts. We are not having enough children, and we have not been having enough children for long enough that there is a demographic crisis where people who are my age are going to be taken care of by younger generations,” Schmidt said.
“In aggregate, all the demographics say there’s going to be a shortage of humans for jobs. Literally too many jobs and not enough people for at least the next 30 years.”
Schmidt is the latest of several high-profile tech figures to raise concerns over the rapid rise of AI and its implications for society.
Recently, OpenAI CEO Sam Altman told a US Senate hearing that he is “nervous” about the future of AI, specifically its ability to manipulate people via “one-on-one…interactive disinformation”. Altman put forward a licensing scheme for companies that develop AI “above a certain scale of capabilities”.
“Given that we’re going to face an election next year and these models are getting better, I think this is a significant area of concern,” Altman said.
“Some regulation would be quite wise on this topic.”
OpenAI released its generative AI tool ChatGPT in November last year; its popularity has since skyrocketed, prompting significant concerns over data collection and misinformation, among other issues.
Earlier this year, more than 1,000 high-profile tech figures and researchers, including Elon Musk, signed an open letter calling for a six-month pause in the development of powerful AI models until robust safeguards are put in place.
In Europe, lawmakers have reached an early agreement on the European Artificial Intelligence Act, which would regulate the development and use of AI systems.
Generative AI tools such as ChatGPT are now well and truly mainstream, used by many people in their day-to-day lives, but not without problems. This week it emerged that a US lawyer had admitted using ChatGPT to research a case, leading him to cite a number of “bogus” cases to the court.