I made this simple high-level diagram of critical longtermist “root factors”, “ultimate scenarios”, and “ultimate outcomes”, focusing on the impact of AI during the transition to transformative AI (TAI).
This involved some adjustments to standard longtermist language.
“Accident Risk” → “AI Takeover”
“Misuse Risk” → “Human-Caused Catastrophe”
“Systemic Risk” → This is split up into a few modules, focusing on “Long-term Lock-in”, which I assume is the main threat.
You can read and interact with it here, where there are (AI-generated) descriptions and pages for each node.
Curious to get any feedback!
I’d love it if there were eventually one or a few well-accepted, high-quality diagrams like this. Right now, some of the common longtermist concepts seem fairly disorganized and messy to me.
---
Reservations:
This is an early draft. There are definitely parts I find inelegant. I’ve played with making the final nodes things like “Pre-transition Catastrophe Risk” and “Post-Transition Expected Value”, for instance. I didn’t include a node for “Pre-transition Value”; I think this could be added, but it would involve some complexity that didn’t seem worth it at this stage. The lines between nodes were mostly generated by Claude and could use more work.
This also heavily caters to the preferences and biases of the longtermist community, specifically some of the AI safety crowd.
Just finding out about this & crux website. So cool. Would love to see something like this for charity ranking (if it isn’t already somewhere on the site).
Don’t you need a philosophy axioms layer between outputs and outcomes? The definitions of existential catastrophe seem to assume a lot of things.
I’d also need to think harder about why and in what context I’m using this, but “governance” being a subcomponent seems wrong when it’s arguably more important and can control literally everything else at the top level.
Good points!
>Would love to see something like this for charity ranking (if it isn’t already somewhere on the site).
I could definitely see this being done in the future.
>Don’t you need a philosophy axioms layer between outputs and outcomes?
I’m nervous that this could get overwhelming quickly. I like the idea of starting with things that are clearly decision-relevant to the website’s particular audience, then expanding from there. I’m open to ideas on better / more scalable approaches!
>“governance” being a subcomponent seems wrong when it’s arguably more important and can control literally everything else at the top level.
Thanks! I’ll keep that in mind. I’d flag that this is an extremely high-level diagram, meant more to be broad and elegant than to flag which nodes are most important. Many critical things are “just subcomponents”. I’d like to make further diagrams for many of the smaller nodes.