Several (hopefully) minor issues:

I consistently get an error message when I try to set the CI to 50% in the OpenPhil bar (and the URL is crazy long!)
Why do we have probability distributions over values that are themselves probabilities? I feel like this still just boils down to a single probability in the end.
Why do we sometimes use $/DALY and sometimes DALYs/$? It seems unnecessarily confusing.
If you really want both, maybe have a button users can toggle? Otherwise, just sticking with one seems best.
“Three days of suffering represented here is the equivalent of three days of such suffering as to render life not worth living.” OK, but what if life is worse than 0? Surely we need a way to represent this as well. My vague memory from the moral weights series was that you assumed valence is symmetric about 0, so perhaps the more sensible unit would be the negative of the value of a fully content life.
“The intervention is assumed to produce between 160 and 3.6K suffering-years per dollar (unweighted) condition on chickens being sentient.” This input seems unhelpfully coarse-grained: it hides a lot of the interesting steps, doesn’t tell me anything about how these numbers are estimated, and isn’t the sort of thing I can intelligently choose my own numbers for. Also, it should be “conditional on”, not “condition on”.
In the small-scale biorisk project, I never seem to get more than about 1,000 DALYs per $1000, even when I crank expansion speed to 0.9c, length of future to 1e8, and the annual extinction risk in era 4 to 1e-8. Why is this? Yes, 150,000 is too few, but I thought I should at least see a large effect when I change key parameters by several OOMs. I’m not really sure what is going on here; I’ll be interested to hear whether you can replicate this, and whether there is a bug or I am just misunderstanding something.
Thanks for your engagement and these insightful questions.
I consistently get an error message when I try to set the CI to 50% in the OpenPhil bar (and the URL is crazy long!)
That sounds like a bug. Thanks for reporting!
(The URL packs in all the settings so that you can send it to someone else, though I’m not sure this is working on the main page. To do that, it needs to be quite long.)
Why do we have probability distributions over values that are themselves probabilities? I feel like this still just boils down to a single probability in the end.
You’re right, it does. Generally, the aim here is just conceptual clarity. It can be harder to assess the combination of two probability assignments than those assignments individually.
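For a single binary event, this is just the law of total probability: the distribution over p collapses to its mean. But the spread still matters once p interacts with other uncertain quantities. A minimal sketch in Python (the Beta and lognormal parameters here are purely illustrative, not the model’s actual inputs):

```python
import numpy as np

rng = np.random.default_rng(0)

# Uncertainty about a probability itself (e.g. sentience could plausibly
# fall anywhere in a range): model it as a Beta distribution over p.
p_samples = rng.beta(2, 5, size=100_000)

# For a single binary event, the distribution over p collapses to one number:
# P(event) = E[p], by the law of total probability.
print("marginal probability:", p_samples.mean())  # ~ 2 / (2 + 5) = 0.286

# The distribution still matters when p is combined with other uncertain
# quantities: the spread of the product depends on the spread of p,
# not just its mean.
impact = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)
print("5th/50th/95th pct of p * impact:",
      np.percentile(p_samples * impact, [5, 50, 95]))
```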
Why do we sometimes use $/DALY and sometimes DALYs/$? It seems unnecessarily confusing.
Yeah. It has been a point of confusion within the team too. The reason for cost per DALY is that it is a metric often used by people making allocation decisions. However, it isn’t a great representation for Monte Carlo simulations where many outcomes involve no effect, because the cost per DALY is then effectively infinite, which has some odd implications. For our purposes, DALYs per $1000 is a better representation. To try to accommodate both considerations, we include both values in different places.
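A toy simulation shows why $/DALY misbehaves when many runs produce no effect (all numbers here are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Toy intervention: costs $1000; succeeds 10% of the time, in which case it
# averts a lognormally distributed number of DALYs; otherwise averts nothing.
cost = 1_000.0
success = rng.random(n) < 0.10
dalys = np.where(success, rng.lognormal(3.0, 1.0, n), 0.0)

# DALYs per $1000 is well-behaved: zero-effect runs just contribute 0.
print("mean DALYs per $1000:", dalys.mean())

# Cost per DALY is not: 90% of runs divide by zero, so the mean is infinite.
with np.errstate(divide="ignore"):
    cost_per_daly = cost / dalys
print("mean $ per DALY:", cost_per_daly.mean())  # inf
```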
OK, but what if life is worse than 0? Surely we need a way to represent this as well. My vague memory from the moral weights series was that you assumed valence is symmetric about 0, so perhaps the more sensible unit would be the negative of the value of a fully content life.
The issue here is that interventions can affect different levels of suffering. For instance, a corporate campaign might include multiple asks that affect animals in different ways. We could have made the model more complicated by incorporating its effect on each different level. Instead, we simplified by ‘summarizing’ the impact at one level, calibrated against research on the impact of similar afflictions in humans. You can represent a negative value just by choosing a higher number of hours than were actually suffered. Think of it in terms of the amount of normal life that the suffering would balance out: if it is really bad, one hour of suffering might be as bad as weeks of normal life would be good.
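If I understand the suggestion correctly, the conversion is just a multiplication. A hypothetical example (the 3x intensity ratio is invented):

```python
# Hypothetical conversion: the model takes a duration at one reference
# intensity ("suffering that renders life not worth living"). Suffering
# worse than the reference is represented by inflating the duration.

actual_hours = 1.0      # real duration of the episode
intensity_ratio = 3.0   # assumed: this suffering is 3x as bad as the
                        # reference intensity

hours_to_enter = actual_hours * intensity_ratio
print(f"Enter {hours_to_enter} hours at the reference intensity to "
      f"represent {actual_hours} hour(s) of worse-than-reference suffering")
```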
“The intervention is assumed to produce between 160 and 3.6K suffering-years per dollar (unweighted) condition on chickens being sentient.” This input seems unhelpfully coarse-grained: it hides a lot of the interesting steps, doesn’t tell me anything about how these numbers are estimated, and isn’t the sort of thing I can intelligently choose my own numbers for.
There is a balance between accuracy and model configurability. In some places, we wanted to include numbers based on other research that we thought was likely to be accurate, but that we couldn’t directly translate into the parameters of the model. I would like to convert those assessments into the model’s terms, perhaps by backtracking to see what parameters produce a similar answer, but this hasn’t been a priority.
In the small-scale biorisk project, I never seem to get more than about 1,000 DALYs per $1000, even when I crank expansion speed to 0.9c, length of future to 1e8, and the annual extinction risk in era 4 to 1e-8. Why is this? Yes, 150,000 is too few, but I thought I should at least see a large effect when I change key parameters by several OOMs. I’m not really sure what is going on here; I’ll be interested to hear whether you can replicate this, and whether there is a bug or I am just misunderstanding something.
Our estimates include calculations for both catastrophic events and extinction. For the small-scale biorisk project, the chance of a catastrophic event is relatively high, but the chance of extinction is low, so I think you’re seeing runs with catastrophic events and no extinction events. When I set the probability of extinction higher and include the far future, I see very large numbers (e.g. https://bit.ly/ccm-bio-high-risk).
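A toy simulation of the mechanism (all probabilities and magnitudes invented): the far-future parameters only enter through the extinction branch, so if that branch is essentially never sampled, cranking those parameters up changes nothing.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000

# Invented numbers for illustration only.
p_catastrophe_averted = 1e-4   # relatively likely branch
p_extinction_averted = 1e-9    # almost never sampled at this n
dalys_catastrophe = 1e7        # near-term DALYs from averting a catastrophe
dalys_extinction = 1e15        # far-future DALYs; scales with expansion
                               # speed, length of future, etc.

u = rng.random(n)
outcome = np.where(u < p_extinction_averted, dalys_extinction,
          np.where(u < p_extinction_averted + p_catastrophe_averted,
                   dalys_catastrophe, 0.0))

# With p_extinction ~ 1e-9 and only 1e6 samples, the extinction branch is
# essentially never hit, so changing dalys_extinction by several orders of
# magnitude leaves this estimate unchanged.
print("mean DALYs:", outcome.mean())
```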
Thanks, that all makes sense. Yes, I think that is it with the biorisk intervention: I was only ever seeing a catastrophic event prevented, not an extinction event. For cost/DALY vs DALYs/cost, making the conversion manually is trivial, so it would make the most sense to me to just report DALYs/cost and let someone take the inverse themselves if they want the other unit.
Hi Oscar,

For cost/DALY vs DALYs/cost, making the conversion manually is trivial, so it would make the most sense to me to just report DALYs/cost and let someone take the inverse themselves if they want the other unit.
Note E(1/X) differs from 1/E(X), so one cannot get the mean cost per DALY from the inverse of the mean DALYs per cost. However, I guess the model only asks for values of the cost per DALY to define distributions? If so, since such values do not refer to expectations, I agree converting from $/DALY to DALY/$ can be done by just taking the inverse.
Ah, good point that we cannot in general swap the order of the expectation operator and an inverse. For scenarios where the cost is fixed, taking the inverse would be fine, but if both the cost and the impact are variable, then yes, it becomes harder, and I think less meaningful if the amount of impact could be 0.
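A quick numerical check of both points (the distributions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000_000

# Arbitrary positive distribution of DALYs averted per $1000.
dalys_per_kusd = rng.lognormal(mean=1.0, sigma=1.0, size=n)

# Jensen's inequality: E[1/X] > 1/E[X] for non-degenerate positive X,
# so the mean cost per DALY is NOT the inverse of the mean DALYs per cost.
print("1 / E[DALYs per $1000]:", 1.0 / dalys_per_kusd.mean())   # ~0.22
print("E[$1000 per DALY]:     ", (1.0 / dalys_per_kusd).mean()) # ~0.61

# And if impact can be exactly 0 in some runs, E[cost/impact] is infinite,
# while E[impact/cost] remains finite.
impact = np.where(rng.random(n) < 0.5, 0.0, dalys_per_kusd)
with np.errstate(divide="ignore"):
    print("E[$1000 per DALY] with zero-impact runs:", (1.0 / impact).mean())
```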