Yes, I mostly agree, but even conditional on info-hazard-y things I still think that the aggregate probability of collapse is a very important parameter.
I’m not sure what you mean—I agree the aggregate probability of collapse is an important parameter, but I was talking about the kinds of bio-risk scenarios that simeon_c was asking for above?
Do I understand you right that overall risk levels should be estimated/communicated even though their components might involve info-hazards? If so, I agree, and it’s tricky. There’ll likely be some progress on this over the next 6-12 months with Open Phil’s project to quantify bio-risk, and (to some extent) the results of UPenn’s hybrid forecasting/persuasion tournament on existential risks.
Thanks for this information!
What’s your probability that we go extinct due to biorisk by 2045?
Also, I think that scenarios that are extremely info-hazard-y shouldn’t be weighted too heavily, because without the info being revealed they will likely remain very unlikely.
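To spell out the decomposition behind this, here’s a toy sketch; every number below is invented purely for illustration, not an actual estimate.

```python
# Toy decomposition (all numbers invented): the unconditional probability of an
# info-hazard-dependent scenario is roughly capped by the chance the info gets out.
p_info_revealed = 0.05           # hypothetical chance the hazardous info becomes widely known
p_scenario_if_revealed = 0.20    # hypothetical chance the bad scenario happens given revelation
p_scenario_if_contained = 0.001  # hypothetical chance it happens anyway via independent rediscovery

p_scenario = (p_info_revealed * p_scenario_if_revealed
              + (1 - p_info_revealed) * p_scenario_if_contained)

print(f"P(scenario | info stays contained) = {p_scenario_if_contained}")
print(f"P(scenario) = {p_scenario:.3f}")  # ~0.011, dominated by the revelation branch
```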
I’m currently involved in the UPenn tournament, so I can’t communicate my forecasts or rationales (to maintain experimental conditions), but it’s at least substantially higher than 1/10,000.
And yeah, I agree that complicated plans where an info-hazard makes the difference are unlikely, but info-hazards also preclude a lot of activity and open communication about scenarios more generally.
And on AI, do you have timelines + P(doom|AGI)?
I don’t have a deep model of AI; I mostly defer to a bodged-together aggregate of reasonable-seeming approaches/people (e.g. Carlsmith/Cotra/Davidson/Karnofsky/Ord/surveys).
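For what it’s worth, here’s a minimal sketch of the kind of bodged-together aggregate I have in mind, pooling several probabilities via the geometric mean of odds; the input values are placeholders, not anyone’s actual published estimates.

```python
import math

# Placeholder probabilities standing in for different sources' answers to the same
# question (e.g. P(doom|AGI)); NOT the real figures from the people named above.
estimates = [0.05, 0.10, 0.20, 0.35]

def pool_geometric_mean_of_odds(probs):
    """Aggregate probabilities by averaging their log-odds (geometric mean of odds)."""
    log_odds = [math.log(p / (1 - p)) for p in probs]
    mean_log_odds = sum(log_odds) / len(log_odds)
    pooled_odds = math.exp(mean_log_odds)
    return pooled_odds / (1 + pooled_odds)

print(f"pooled estimate: {pool_geometric_mean_of_odds(estimates):.3f}")  # ~0.14 for these inputs
```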
I think that’s one of the things that explains why many people find my claim far too strong: in the EA community, very few people have a strong inside view on both advanced AI and biorisk. (I think that’s true more generally for most combinations of cause areas.)
And indeed, I think that with the kind of uncertainty one must have when deferring, it becomes harder to make claims as strong as the one I’m making here.
I don’t think the reasoning that extremely info-hazard-y scenarios will stay unlikely as long as the info isn’t revealed works in general. A highly dangerous technology could become obvious in 2035, but we could still want actors not to know about it until as late as possible. Or the probability of a leak over the next 10 years could be high, yet it could still be worth trying to maintain secrecy.
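As a toy illustration of that last point, under made-up numbers: even when a leak within the decade is likely either way, lowering the per-year leak chance still reduces expected harm noticeably.

```python
# Toy model (all figures invented): harm accrues from the year the hazardous info
# becomes widely known until the end of a 10-year horizon.
horizon_years = 10
annual_harm_once_known = 1.0  # arbitrary harm units per year of public knowledge

def expected_harm(p_leak_per_year):
    """Expected harm over the horizon if a leak can occur independently each year."""
    harm = 0.0
    p_still_secret = 1.0
    for year in range(horizon_years):
        p_leak_this_year = p_still_secret * p_leak_per_year
        years_of_exposure = horizon_years - year
        harm += p_leak_this_year * years_of_exposure * annual_harm_once_known
        p_still_secret *= 1 - p_leak_per_year
    return harm

# A leak within 10 years is likely in both cases (~97% vs. ~80%), yet the extra
# secrecy effort still cuts expected harm from ~7.7 to ~5.4 harm units.
print(f"no secrecy effort (30%/year leak chance):   {expected_harm(0.30):.2f}")
print(f"with secrecy effort (15%/year leak chance): {expected_harm(0.15):.2f}")
```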
Yes, I think you’re right actually.
Here’s a weaker claim which I think is true:
- When someone knows about and has thought about an info-hazard, the baseline expectation is that they’re far more likely to cause harm via it than to do good.
- Thus, I’d recommend that anyone who isn’t actively working on preventing the classes of scenario where the info-hazard would end up being very bad try to forget it and not talk about it, even to trusted individuals. Otherwise it will most likely be net negative.