Related to this, I think some aspects of the post were predictably off-putting to people who aren’t already in these communities—examples include the specific citations* used (e.g. Holden’s post, which uses a silly-sounding acronym [PASTA], and Ajeya’s report, which is super long and in the unusual-to-most-people format of several Google Docs), and a style of writing that likely comes off as strange to people outside of these communities (“you can roughly model me as”; “all of this AI stuff”).
*Some of this critique has to do with the state of the literature, not just the selection thereof. But insofar as there is serious interest here in engaging with folks outside of EA/rationalist/longtermist circles (it’s not clear to me whether there is), the selections could have been more carefully chosen or caveated, or new ones could have been created.
I’ve also seen online pushback against phrasing it as a conditional probability: commenters felt that putting a number on it is nonsensical because the events are (necessarily) poorly defined and there’s way too much uncertainty.
Do you also think this yourself? I don’t clearly see what worlds would look like in which P(doom | AGI) is ambiguous in hindsight. Some major accident because everything is going too fast?
There are some things we would recognize as an AGI, but others (that we’re still worried about) are ambiguous. There are some things we would immediately recognize as ‘doom’ (like extinction) but others are more ambiguous (like those in Paul Christiano’s “what failure looks like”, or like a seemingly eternal dictatorship).
In AI alignment contexts, I sort of view AGI as a stand-in for powerful optimization capable of killing us.
Yeah, I think I would count these as unambiguous in hindsight. Though siren worlds might be an exception.