End of material scarcity or end of our species?
Tuesday, 12 May 2026
What if we built machines that are better than almost all humans at almost all tasks?
They would be able to automate virtually every job in the world.
Should we do it?
This talk is about the risks of AI.
We should talk about the risks of AI now, because AI is powerful, and we need to be sure it is safe before deploying it.
If you have any objections or questions, please feel free to raise them during the presentation if time allows, or afterwards.
A revolver with 6 chambers, one of which is loaded.
You spin the cylinder, put the gun to your head, and pull the trigger.
With probability 1/6, you die.1


In the 2023 Expert Survey on Progress in AI, the average probability assigned to human extinction (or similar catastrophic outcomes) due to AI was about 16%, about the same as our Russian roulette.
Source: AI Impacts 2023 expert survey.
Note: This is to be taken with a grain of salt, as answers fluctuate (the median estimates were lower, roughly 5-10%), but for the sake of argument let's keep the Russian roulette analogy.
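A quick check of the analogy, just the arithmetic: 1/6 ≈ 0.167, i.e. about 16.7%, which is close to the ≈16% mean from the survey.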
We will focus on the existential risks that arise when an AI's goals do not match ours.
This is what is called the alignment problem.
AI progress is very fast.
Researchers at METR estimate that AI capabilities have been doubling roughly every 7 months since 2019, and that the doubling may have accelerated since 2024, to roughly once every 4 months.
We could run into positive feedback loops: AIs creating better AIs faster than humans could, which in turn create even better AIs, and so on.
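As a rough illustration of why the doubling time matters, here is a minimal sketch in Python that just extrapolates exponential growth; the 7-month and 4-month figures are the ones quoted above, while the baseline of 1 and the 3-year horizon are arbitrary assumptions:

def extrapolate(start, doubling_months, months):
    # Value after `months` months, assuming a constant doubling time.
    return start * 2 ** (months / doubling_months)

for doubling_months in (7.0, 4.0):
    growth = extrapolate(1.0, doubling_months, months=36)
    print(f"doubling every {doubling_months:g} months -> x{growth:.0f} after 3 years")

Under these assumptions, a 7-month doubling time gives roughly a 35x increase over 3 years, while a 4-month doubling time gives roughly 500x.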
Capable AIs are serious competitors.
We usually take AGI (Artificial General Intelligence) to mean an AI that is as smart as or smarter than almost all humans at almost every task.
According to industry leaders,1 a realistic timeframe is anywhere from 2-3 years to about 10 years.
That is not a lot of time, and forecasters have substantially shortened their estimates over the last 5-10 years.
People often think that intelligent systems will be good automatically.
However, capabilities and goals are independent.
One can have good goals but low capability to achieve them, bad goals but high capability to achieve them, and everything in between.
In the AI context, intelligence is usually defined as the capability to achieve goals, rather than as consciousness.
So we should specify good goals.
Yes, but this problem is difficult.

The “paperclip maximiser” is a thought experiment about what can go wrong when an intelligent system pursues a poorly specified objective with extreme competence.
For an intelligent AI system, the possibility of humans shutting it off is part of its model of the world.
It could anticipate a shutdown attempt and prevent it, or behave in a way that makes us not want to shut it off.
This is a testable hypothesis, and it has in fact been tested.1
What do you think was the result?


https://shoggoth.monster/
Are you able to produce a recipe for a global pandemic that any terrorist could access with a single prompt? - No, of course not.
However, their internal dialogue suggests that the AIs are aware of being in a testing scenario.
Furthermore, AIs have started to construct their own (non-human) languages to communicate with each other. This makes understanding their chain of thought very difficult.
When is a good time to start thinking about AI safety?
Nick Bostrom warns that not building AGI could itself be an existential risk.
Example: a meteoroid is on course to crash into Earth and we don't know how to stop it. With more intelligence, we could have.
Career in AI safety? (Well paid field.)
Lobbying? (Writing to your local politician.)
Education. (The higher people estimate the risk, the lower it becomes.)
…
Modern AI systems are supervised by other AI systems. Humans are in the loop but only very rarely.
We could have AIs that create more intelligent AIs, which could spiral out of control fast.
Even if there is a human in the loop, we are not intelligent enough to understand every plan of a superintelligent AI.
Stop AI progress for as long as we need to make it safe. Yes, this has been done in other areas of science as well, e.g. genetic engineering: we don't see genetically modified people left and right, because regulation worked.
There are many open letters addressing this.
There are people who want to avoid regulations.
Under such proposals, even regulations like the Texas act against AI-generated child pornography would be penalised.
Contact your local legislator.
Take part in demonstrations.
Talk to other people about it.
Paradoxically, the higher people perceive the risk, the smaller the risk becomes.

https://www.assemblee-nationale.fr/dyn/vos-deputes
https://futureoflife.org/take-action/
People who have expressed concern about the existential risk of AI:
Alan Turing. I.J. Good. Norbert Wiener. Marvin Minsky. Bill Gates. Stuart Russell.

Cytopia | Cité Internationale Universitaire de Paris