Sketching specific bio-risk extinction scenarios would likely involve substantial info-hazards.
You could avoid such infohazards by drawing up the scenarios in a private message or private doc that’s only shared with select people.
I think that if you take these infohazards seriously enough, you probably shouldn't even do that. Because if each person has a 95% likelihood of keeping it secret, then with 10 people in the know the chance it stays secret is only about 60%.
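To spell out the arithmetic behind that 60% figure (a rough sketch, assuming each of the 10 people independently keeps the secret with probability 0.95):

\[
P(\text{secret kept}) = 0.95^{10} \approx 0.599 \approx 60\%
\]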
I see what you mean, but if you take cause prioritization seriously enough, it is really stifling to have literally no place to discuss x-risks in detail. Carefully managed private spaces are the best compromise I've seen so far, but if there's something better then I'd be really glad to learn about it.
I think I'd rather we stay, for as long as we can, in the domain of aggregate probabilities and proxies for real scenarios, particularly for biorisks.
Mostly because I think most people can't do much about infohazardy things, so the first-order effect is just net negative.
Yes, I mostly agree, but even conditional on info-hazardy things, I still think that the aggregate probability of collapse is a very important parameter.
I’m not sure what you mean—I agree the aggregate probability of collapse is an important parameter, but I was talking about the kinds of bio-risk scenarios that simeon_c was asking for above?
Do I understand you right that overall risk levels should be estimated/communicated even though their components might involve info-hazards? If so, I agree, and it's tricky. There'll likely be some progress on this over the next 6-12 months with Open Phil's project to quantify bio-risk, and (to some extent) the results of UPenn's hybrid forecasting/persuasion tournament on existential risks.
Thanks for this information!
In your view, what's the probability that we go extinct due to biorisks by 2045?
Also, I think that extremely infohazardy scenarios shouldn't weigh too heavily in our thinking, because without the information being revealed they will likely remain very unlikely.
I'm currently involved in the UPenn tournament, so to maintain experimental conditions I can't communicate my forecasts or rationales, but it's at least substantially higher than 1/10,000.
And yeah, I agree that complicated plans where an info-hazard makes the difference are unlikely, but info-hazards also preclude much activity and open communication about scenarios more generally.
And on AI, do you have timelines + P(doom|AGI)?
I don't have a deep model of AI; I mostly defer to some bodged-together aggregate of reasonable-seeming approaches/people (e.g. Carlsmith/Cotra/Davidson/Karnofsky/Ord/surveys).
I think that's one of the problems that explains why many people find my claim far too strong: in the EA community, very few people have a strong inside view on both advanced AI and biorisks. (I think that's more generally true for most combinations of cause areas.)
And indeed, I think that with the kind of uncertainty one must have when deferring, it becomes harder to make claims as strong as the one I'm making here.
I don’t think this reasoning works in general. A highly dangerous technology could become obvious in 2035, but we could still want actors to not know about it until as late as possible. Or the probability of a leak over the next 10 years could be high, yet it could still be worth trying to maintain secrecy.
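To illustrate with made-up numbers (assumptions for the sake of the sketch, not estimates): suppose each year carries an independent 20% chance of a leak. Then

\[
P(\text{leak within 10 years}) = 1 - 0.8^{10} \approx 0.89,
\]

yet the expected time until a leak is still about \(1/0.2 = 5\) years, so attempting secrecy can still buy a meaningful delay.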
Yes, I think you’re right actually.
Here's a weaker claim which I think is true:
- When someone knows about and has thought about an infohazard, the baseline is that they're far more likely to cause harm via it than to do good.
- Thus, I'd recommend that anyone who isn't actively working on preventing the classes of scenario where the infohazard would end up being very bad try to forget it and not talk about it, even to trusted individuals. Otherwise it will most likely be net negative.