(I’m guessing you mean difference-making risk aversion here, based on your options being implicitly compared to doing nothing.)
When considering the potential of larger indirect effects on wild invertebrates, the far future and other butterfly effects, which interventions do you think look good (better than doing nothing) on difference-making risk aversion (or difference-making ambiguity aversion)?
(I suspect there are none for modest levels of difference-making risk/​ambiguity aversion, and we should be thinking about difference-making in different ways.)
I think I mean something slightly different from difference-making risk aversion, but I see what you’re saying. I don’t even know if I’m arguing against EV maximization; I'm more just trying to point out that EV alone doesn’t feel like it fully captures the value I care about (e.g. the likelihood of causing harm relative to doing nothing feels like another important consideration). Specifically, it feels concerning that there are plausible circumstances where an action has positive EV, yet by taking it I am more likely than not to cause additional harm. I imagine lots of AI risk work could be like this: doing some research project has a strong chance of advancing capabilities a bit (high probability of a small negative value), but maybe a very small chance of massively reducing risk (low probability of a huge positive value). The EV looks good, but in my median outcome the world is worse than if I hadn’t done anything.
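To make the tension concrete, here is a minimal sketch with made-up numbers (the payoffs and probabilities are purely illustrative, not estimates of any real project): a gamble whose expected value is positive even though its median outcome is worse than doing nothing.

```python
import statistics

# Hypothetical payoff distribution for a research project:
# 95% chance of slightly advancing capabilities (value -1),
# 5% chance of massively reducing risk (value +100).
outcomes = [-1, 100]
probs = [0.95, 0.05]

# Expected value: sum of probability-weighted payoffs.
ev = sum(p * v for p, v in zip(probs, outcomes))

# Median: represent the distribution as a weighted sample of 100 draws.
# 95 of the 100 draws are -1, so the median sits at -1.
sample = [-1] * 95 + [100] * 5
median = statistics.median(sample)

print(ev)      # 4.05 -> positive expected value
print(median)  # -1   -> median outcome is worse than doing nothing
```

So a pure EV maximizer takes the gamble, while someone who also weighs the most likely difference they make (relative to doing nothing) has a reason to hesitate.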
Ok, that makes sense. I’d guess butterfly effects would be neutral in the median difference. The same could be the case for indirect effects on wild animals and the far future, although I’d say it’s highly ambiguous (imprecise probabilities) and something to be clueless about, and not precisely neutral about.
Would you say you care about the overall distribution of differences, too, and not just the median and the EV?
Probably, but not sure! Yeah, the above is definitely ignoring cluelessness considerations, on which I don’t have any particularly strong opinion.