Clarifying the kind of timelines work I think is low-importance:
I think there’s value in distinguishing worlds like “1% chance of AGI by 2100” versus “10+% chance”, and distinguishing “1% chance of AGI by 2050” versus “10+% chance”.
So timelines work enabling those updates was good.[1]
But I care a lot less about, e.g., “2/3 by 2050” versus “1/3 by 2050”.
And I care even less about distinguishing, e.g., “30% chance of AGI by 2030, 80% chance of AGI by 2050” from “15% chance of AGI by 2030, 50% chance of AGI by 2050”.
Though I think it takes very little evidence or cognition to rationally reach 10+% probability of AGI by 2100.
One heuristic way of seeing this is to note how confident you’d need to be that ‘stuff like the deep learning revolution (as well as everything that follows it) won’t get us to AGI in the next 85 years’ in order to justify a 90+% prediction to that effect (sketched below).
Notably, you don’t need a robust or universally persuasive 10+% in order to justify placing the alignment problem at or near the top of your priority list.
You just need that to be your subjective probability at all, coupled with a recognition that AGI is an absurdly big deal and aligning the first AGI systems looks non-easy.
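To spell that heuristic out (a minimal sketch, using only the thresholds already stated above):

\[
P(\text{AGI by 2100}) < 10\% \;\iff\; P(\text{no AGI in the next 85 years}) > 90\%,
\]

so assigning less than 10% to AGI by 2100 just is the claim that you're over 90% confident that deep learning and everything downstream of it falls short for 85 straight years.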
What about distinguishing 50% by 2050 vs. 50% by 2027?