A bias for expected value calculations in (most) talks
I (somewhat obviously) have a strong preference for numeric estimates. I strongly prefer it when people presenting their organizations give cost-effectiveness numbers in their talks. That said, in the few talks I have seen at EA conferences, I haven't seen this much (still more than at other conferences, but that's not saying much).
I would find it very interesting if there were a standard for most talks proposing or discussing a program to end with some cost-effectiveness values or other standard quantifications.
I think there's a risk that explicit computations might lead both you and your audience to become overconfident in your conclusions.
Moreover, doing them in a way that's well calibrated to potential sources of risk and error is a skill, and I wouldn't want to suggest to presenters either that they should make something well outside their field of expertise a central part of their talk, or that they shouldn't give a talk at all if they can't accurately compute EVs for the things they propose.
I cautiously like this idea. I wonder if it's potentially a distraction, where people end up spending lots of time trying to prove or defend their estimates rather than give their talks.
Also tricky is the fact that expected value estimates require you to take explicit stands on values, which might not be very productive. E.g. you get this estimate for the Against Malaria Foundation if you think future people are X important, that estimate if you think death is Y bad, etc.
I think we as a society (or intellectual circle) have a long way to go in understanding EV calculations, but I'd note here that EV calcs don't have to be expressed relative to total utility. Where there is value uncertainty, they can instead be split into separate parts, leaving the audience to resolve the trade-offs from there.
For instance, saying that an intervention "saves 1 life per $10k to $30k in region X" seems fine to me, if it's a fair interval/estimate. If there are multiple outcomes, maybe: "Every $10k saves 10-30 QALYs over the next 3 years, and separately seems to decrease long-term risks by a factor of Y."
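To make the interval idea concrete, here is a minimal Monte Carlo sketch of how such a cost-per-life interval could be produced by propagating uncertainty through a simple model. All input numbers and the two-factor model (cost per net, nets per life saved) are purely illustrative assumptions, not actual figures for any real intervention.

```python
import random

def simulate_cost_per_life(n=100_000, seed=0):
    """Propagate interval uncertainty through a toy cost-effectiveness
    model and return a 90% interval for cost per life saved.
    All input ranges below are made-up, illustrative numbers."""
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # Hypothetical inputs, each drawn from a uniform interval.
        cost_per_net = rng.uniform(4.0, 6.0)          # dollars per net
        nets_per_life_saved = rng.uniform(2000, 5000)  # nets needed per life
        samples.append(cost_per_net * nets_per_life_saved)
    samples.sort()
    # Report a 90% interval rather than a single point estimate.
    lo = samples[int(0.05 * n)]
    hi = samples[int(0.95 * n)]
    return lo, hi

lo, hi = simulate_cost_per_life()
print(f"Cost per life saved: roughly ${lo:,.0f} to ${hi:,.0f}")
```

The point of reporting the 5th-95th percentile range instead of a mean is exactly the "fair interval" framing above: the talk can end with a defensible range per outcome, without collapsing everything into one contested total-utility number.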