I’m currently involved in the UPenn tournament, so to maintain experimental conditions I can’t share my forecasts or rationales, but the probability is at least substantially higher than 1⁄10,000.
And yeah, I agree that complicated plans where an info-hazard makes the difference are unlikely, but info-hazards also preclude much open activity and communication about scenarios in general.
And on AI, do you have timelines + P(doom|AGI)?
I don’t have a deep model of AI—I mostly defer to some bodged-together aggregate of reasonable-seeming approaches/people (e.g. Carlsmith/Cotra/Davidson/Karnofsky/Ord/surveys).
I think that’s one of the problems that explains why many people find my claim far too strong: in the EA community, very few people have a strong inside view on both advanced AI and biorisk. (I think that’s more generally true for most combinations of cause areas.)
And I think that, with the kind of uncertainty one must have when deferring, it indeed becomes harder to make claims as strong as the one I’m making here.