When you say “we do not invest in _ research”, do you mean EAs specifically, or all humans? It’s worth noting some people not associated with EA will probably do research in each area regardless.
The probability that if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct, and if we do invest in that X-risk research, humans will not go extinct, is p.
I’m having trouble understanding this probability. I don’t think it can be interpreted as a single event (even conditionally), unless you’re thinking of probabilities over probabilities or probabilities over statements, rather than over actual events that can happen at specific times and places (or over intervals of time and regions of space).
Letting
$X$ = humans go extinct
$X_A$ = non-human animals go extinct
$R_X$ = we invest in X-risk reduction research (or work, in general)
$R_W$ = we invest in WAS research (or work, in general)
Then the probability of “if we do not invest in X-risk reduction research (but we invest in wild animal suffering reduction research instead), humans will go extinct and animals will not go extinct” looks like
$$P(X \text{ and not } X_A \mid (\text{not } R_X) \text{ and } R_W)$$
while the probability of “if we do invest in that X-risk research, humans will not go extinct” looks like
$$P(\text{not } X \mid R_X)$$
The events being conditioned on in these two probabilities are incompatible, since the first conditions on not $R_X$ while the second conditions on $R_X$. So, I’m not sure taking their product would be meaningful either. I think it would make more sense to multiply each of these probabilities by the expected value of its corresponding event and just compare them. In general, you would calculate:
$$E[V \mid R_X = r, R_W = s]$$
where $R_X$ is now the level of investment in X-risk work, $R_W$ is now the level of investment in WAS work, and $V$ is the aggregate value. Then you would compare this for different values of $r$ and $s$, i.e. different levels of investment (or compare the partial derivatives with respect to each of $r$ and $s$ at a given level of $r$ and $s$; this would tell you the marginal expected value of extra resources going to each of X-risk work and WAS work).
With $X$ being 1 if humans go extinct and 0 otherwise (the indicator function), $X_A$ being 1 if non-human animals go extinct and 0 otherwise, and $V$ depending on them, that expected value could further be broken down (by the law of total expectation) to get
$$E[V \mid R_X = r, R_W = s] = \sum_{x \in \{0,1\}} \sum_{x_A \in \{0,1\}} E[V \mid X = x, X_A = x_A, R_X = r, R_W = s]\, P(X = x, X_A = x_A \mid R_X = r, R_W = s)$$
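As a toy numerical sketch of this comparison (every number, function name, and functional form below is made up purely for illustration; none of it is an estimate of anything):

```python
import numpy as np

# Toy model only: all probabilities, values, and functional forms are invented for illustration.
# r = resources going to X-risk work, s = resources going to WAS work (arbitrary units).

def p_joint(x, x_a, r, s):
    """P(X = x, X_A = x_a | R_X = r, R_W = s) under toy assumptions:
    extinction probabilities fall with X-risk investment, the two extinctions are
    treated as independent given the investment levels, and WAS investment s is
    assumed (for simplicity) not to affect extinction probabilities."""
    p_x = 0.10 * np.exp(-0.1 * r)    # toy P(humans go extinct)
    p_xa = 0.02 * np.exp(-0.05 * r)  # toy P(non-human animals go extinct)
    return (p_x if x else 1 - p_x) * (p_xa if x_a else 1 - p_xa)

def value(x, x_a, s):
    """Toy aggregate value V given the extinction outcomes; WAS work only pays off
    in worlds where wild animals still exist."""
    human_value = 0.0 if x else 100.0
    wild_animal_value = 0.0 if x_a else (-50.0 + 2.0 * s)
    return human_value + wild_animal_value

def expected_value(r, s):
    """E[V | R_X = r, R_W = s], broken down over the four (X, X_A) outcomes."""
    return sum(value(x, x_a, s) * p_joint(x, x_a, r, s)
               for x in (0, 1) for x_a in (0, 1))

# Marginal expected value of one extra unit of resources to each area,
# approximated by finite differences at a given allocation (r, s).
r, s, eps = 5.0, 5.0, 1e-4
marginal_x_risk = (expected_value(r + eps, s) - expected_value(r, s)) / eps
marginal_was = (expected_value(r, s + eps) - expected_value(r, s)) / eps
print(f"Marginal EV of extra X-risk work: {marginal_x_risk:.4f}")
print(f"Marginal EV of extra WAS work:    {marginal_was:.4f}")
```

The point is just the shape of the comparison: compute $E[V \mid R_X = r, R_W = s]$ under whatever model you prefer and compare the marginal values, rather than multiplying conditional probabilities that condition on incompatible events.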
You specify further that
This probability is the product of the probability that there will be a potential extinction event (e.g. 10%), the probability that, given such an event, the extra research in X-risk reduction (with the resources that would otherwise have gone to wild animal suffering research) to avoid that extinction event is both necessary and sufficient to avoid human extinction (e.g. 1%) and the probability that animals will survive the extinction event even if humans do not (e.g. 1%).
But you’re treating the probability of a potential extinction event as if X-risk reduction research has no effect on it, and as if that research only affects the probability of actual human extinction given such an event; X-risk research aims to address both.
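To spell that out, writing $C$ for “there is a potential extinction event” (notation I’m introducing just for this point) and supposing, for simplicity, that human extinction only happens via such an event:
$$P(X \mid R_X) = P(C \mid R_X)\, P(X \mid C, R_X)$$
X-risk work can lower both factors, not just the second, so holding the first factor fixed at 10% regardless of the research understates its effect.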
The probability that A is “both necessary and sufficient” for B is also a bit difficult to think about. One way might be the following, but I think this would be difficult to work with, too:
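For instance, one sketch in potential-outcomes-style notation (which I’m introducing here only as an illustration): let $X(1)$ and $X(0)$ be whether humans go extinct with and without the extra X-risk investment, given that a potential extinction event occurs. Then “the extra research is both necessary and sufficient to avoid human extinction” could be read as the event that $X(0) = 1$ and $X(1) = 0$, i.e. the probability
$$P(X(0) = 1 \text{ and } X(1) = 0 \mid C)$$
This is a statement about the joint distribution of two counterfactual outcomes we never observe together, which is part of why it seems hard to work with.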