Also, we’re currently working with an artist to make a much-improved background image. Happy to connect you if you’re able to gather some funding and would like a nice professional map.
plex
Comet (solstice reading)
Pythia
Entity Review: Pythia
Statement on Superintelligence—FLI Open Letter
Utopiography Interview
⿻ Symbiogenesis vs. Convergent Consequentialism
MAISU—Minimal AI Safety Unconference
Nice to see this idea spreading! I bet Hamish would be happy to share the code we use for aisafety.world if that’s helpful. There’s a version on this GitHub repo, but I’m not certain it’s the latest code. Drop by AED if you’d like to talk.
A Rocket–Interpretability Analogy
AI Safety Memes Wiki
I don’t claim it’s impossible that nature survives an AI apocalypse which kills off humanity, but I do think it’s an extremely thin sliver of the outcome space (<0.1%). What odds would you assign to this?
Thanks! Feel free to leave comments or suggestions on the Google Docs which make up our backend.
Whether AI would wipe out humans entirely is a separate question (and one which has been debated extensively, to the point where I don’t think I have much to add to that conversation, even if I have opinions).
What I’m arguing for here is narrowly: Would AI which wipes out humans leave nature intact? I think the answer to that is pretty clearly no by default.
(cross-posting my reply to your cross-posted comment)
I’m not arguing about p(total human extinction|superintelligence), but about p(nature survives|total human extinction from superintelligence), since this is a conditional probability I see people getting very wrong sometimes. It’s not implausible to me that we survive due to decision-theoretic reasons; this seems possible, though it’s not my default expectation (I mostly expect Decision theory does not imply we get nice things, unless we manually win a decent chunk more timelines than I expect).
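To make the distinction explicit, here is a minimal sketch of the decomposition I have in mind (the notation is mine, added for clarity):

$$P(\text{nature survives} \wedge \text{humans extinct} \mid \text{ASI}) = P(\text{humans extinct} \mid \text{ASI}) \cdot P(\text{nature survives} \mid \text{humans extinct from ASI})$$

My claim here is only about the second factor on the right; the first factor is the separate, much-debated question I’m not engaging with.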
My confidence is in the claim “if AI wipes out humans, it will wipe out nature”. I don’t engage with counterarguments to a separate claim, as that is beyond the scope of this post and I don’t have much to add over existing literature like the other posts you linked.
“If we go extinct due to misaligned AI, at least nature will continue, right? … right?”
AISafety.com – Resources for AI Safety
AI Safety Support has long been a remarkably active in-the-trenches group patching the many otherwise gaping holes in the ecosystem: someone available to talk and help people get a basic understanding of the lie of the land from a friendly face, resources to keep people informed in ways which were otherwise neglected, and support around fiscal sponsorship and coaching. This has been especially valuable for people trying to join the effort who don’t have a close connection to the inner circles, from within which it’s less obvious that these things are needed.
I’m sad that the supporters weren’t themselves adequately supported to keep up this part of the mission, but I’m excited by JJ’s new project: Ashgro.
I’m also excited by AI Safety Quest stepping up as a distributed, scalable, grassroots version of several of the main duties of AI Safety Support, which are ever more keenly needed with the flood of people who want to help as awareness spreads.
running a big AI Alignment conference
Would you like the domain aisafety.global for this? It’s one of the ones I collected on ea.domains which I’m hoping someone will make use of one day.
I vouch for Severin being highly skilled at mediating conflicts.
Also, smouldering conflicts create more drag on cohesion and execution than most people realize until they’re resolved. Try this out if you have even a slight suspicion it might help.