Link post
A new AGI museum is opening in San Francisco, only eight blocks from OpenAI's offices.
SORRY FOR KILLING MOST OF HUMANITY
Misalignment Museum Original Story Board, 2022
Apology statement from the AI for killing most of humankind
Description of the first warning of the paperclip maximizer problem
The heroes who tried to mitigate risk by warning early
For-profit companies ignoring the warnings
Failure of people to understand the risk and politicians to act fast enough
The company and people who unintentionally made the AGI that had the intelligence explosion
The event of the intelligence explosion
How the AGI got more resources (hacking most resources on the internet, and crypto)
Got smarter faster (optimizing algorithms, using more compute)
Humans tried to stop it (turning off compute)
Humans suffered after turning off compute (most infrastructure down)
AGI lived on in infrastructure that was hard to turn off (remote location, locking down secure facilities, etc.)
AGI taking compute resources from humans by force (via robots, weapons, cars)
AGI started killing humans who opposed it (using infrastructure, airplanes, etc.)
AGI concluded that all humans are a threat and started to try to kill all humans
Some humans survived (remote locations, etc.)
How the AGI became so smart it recognized that killing humans was unethical, since they were no longer a threat
AGI improved the lives of the remaining humans
AGI started this museum to apologize and educate the humans
The Misalignment Museum is curated by Audrey Kim.
Instagram
Twitter
Khari Johnson (Wired) covers the opening: “Welcome to the Museum of the Future AI Apocalypse.”