… if only they had allowed people not to publish on EA Forum, LessWrong, and Alignment Forum :)
Honestly, not allowing other ways of submitting seems like a mistake to me. For example, some people may not want to publicly apply for a prize or be associated with our communities. An additional submission form might help with that.
Related to this, I think some aspects of the post were predictably off-putting to people who aren’t already in these communities. Examples include the specific citations* used (e.g. Holden’s post, which uses a silly-sounding acronym [PASTA], and Ajeya’s report, which is very long and in the unusual-to-most-people format of several Google Docs), and a style of writing that likely comes across as strange to people outside these communities (“you can roughly model me as”; “all of this AI stuff”).
*Some of this critique has to do with the state of the literature, not just the selection thereof. But insofar as there is serious interest here in engaging with folks outside of EA/rationalists/longtermists (it’s not clear to me whether there is), the selections could have been chosen more carefully or caveated, or new ones could have been written.
I’ve also seen online pushback against phrasing the question as a conditional probability: commenters felt that putting a number on it is nonsensical because the events are (necessarily) poorly defined and there is far too much uncertainty.
Do you also think this yourself? I can’t clearly picture worlds in which P(doom | AGI) would be ambiguous in hindsight. Some major accident because everything is moving too fast?
There are some things we would recognize as AGI, but others (that we’re still worried about) are ambiguous. Likewise, there are some outcomes we would immediately recognize as ‘doom’ (like extinction), but others are more ambiguous (like those in Paul Christiano’s “What Failure Looks Like”, or a seemingly eternal dictatorship).
I’m partly sympathetic to the idea of allowing submissions in other forums or formats.
However, I think it’s likely to be very valuable to the Future Fund and the prize judges, when sorting through potentially hundreds or thousands of submissions, to be able to see upvotes, comments, and criticisms from EA Forum, Less Wrong, and Alignment Forum, which is where many of the subject matter experts hang out. This will make it easier to identify essays that seem to get a lot of people excited, and that don’t contain obvious flaws or oversights.
very valuable… to be able to see upvotes, comments, and criticisms from EA Forum, Less Wrong, and Alignment Forum, which is where many of the subject matter experts hang out.
I think it’s the opposite. Only those experts who already share views similar to the Future Fund’s (or more pessimistic ones) are there, and they’d introduce a large bias.
Yes, that makes sense. How about stating that reasoning and thereby nudging participants to post on the EA Forum/LessWrong/Alignment Forum, while additionally having a non-public submission form? My guess is that only a small number of participants would then submit via the form, so the amount of additional work should be limited. That bet seems better to me than the current one, where you might miss really important contributions.
There are some things we would recognize as an AGI, but others (that we’re still worried about) are ambiguous. There are some things we would immediately recognize as ‘doom’ (like extinction) but others are more ambiguous (like those in Paul Christiano’s “what failure looks like”, or like a seemingly eternal dictatorship).
I sort of view AGI, in AI alignment contexts, as a stand-in for powerful optimization capable of killing us.
Yeah, I think I would count these as unambiguous in hindsight, though siren worlds might be an exception.