Max Tegmark, head of the US-based Future of Life Institute that has regularly warned of AI's dangers, told AFP that France should not miss the opportunity to act.
"France has been a wonderful champion of international collaboration and has the opportunity to really lead the rest of the world," the MIT physicist said.
"There is a big fork in the road here at the Paris summit and it should be embraced."
"Will to survive"
Tegmark's institute has backed the Sunday launch of a platform dubbed Global Risk and AI Safety Preparedness (Grasp) that aims to map major risks linked to AI and solutions being developed around the world.
"We've identified around 300 tools and technologies in answer to these risks," said Grasp co-ordinator Cyrus Hodes.
Results from the survey will be passed to the OECD, the club of rich countries, and to members of the Global Partnership on Artificial Intelligence (GPAI), a grouping of almost 30 nations including major European economies, Japan, South Korea and the United States, which will meet in Paris.
The past week also saw the presentation of the first International AI Safety Report on Thursday, compiled by 96 experts and backed by 30 countries, the UN, EU and OECD.
Risks outlined in the document range from the familiar, such as fake content online, to the far more alarming.
"Proof is steadily appearing of additional risks like biological attacks or cyber attacks," the report's co-ordinator and noted computer scientist Yoshua Bengio told AFP.
In the longer term, 2018 Turing Award winner Bengio fears a possible "loss of control" by humans over AI systems, potentially motivated by "their own will to survive".
"A lot of people thought that mastering language at the level of ChatGPT-4 was science fiction as recently as six years ago, and then it happened," said Tegmark, referring to OpenAI's chatbot.
"The big problem now is that a lot of people in power still have not understood that we're closer to building artificial general intelligence (AGI) than to figuring out how to control it."
Besting human intelligence?
AGI refers to an artificial intelligence that would equal or better humans in all fields.
Its arrival within a few years has been heralded by the likes of OpenAI chief Sam Altman.
"If you just eyeball the rate at which these capabilities are increasing, it does make you think that we'll get there by 2026 or 2027," Dario Amodei, Altman's counterpart at rival Anthropic, said in November.
"At worst, these American or Chinese companies lose control over this and then after that, Earth will be run by machines," Tegmark said.

Stuart Russell, a computer science professor at the University of California, Berkeley, said one of his greatest fears is "weapons systems where the AI that is controlling that weapon system is deciding who to attack, when to attack, and so on".
Russell, who is also co-ordinator of the International Association for Safe and Ethical AI (IASEI), places the responsibility firmly on governments to set up safeguards against armed AIs.
Tegmark said the solution is very simple: treating the AI industry the same way all other industries are treated.
"Before somebody can build a new nuclear reactor outside of Paris, they have to demonstrate to government-appointed experts that this reactor is safe. That you're not going to lose control over it … it should be the same for AI," said Tegmark.
– Agence France-Presse