Thanks for this. You’re right that we don’t give an overall theory of how to handle either decision-theoretic or moral uncertainty. The team is only a few months old and the problems you’re raising are hard. So, for now, our aims are just to explore the implications of non-EVM decision theories for cause prioritization and to improve the available tools for thinking about the EV of x-risk mitigation efforts. Down the line (and with additional funding!) we’ll be glad to tackle many additional questions. And, for what it’s worth, we do think that the groundwork we’re laying now will make it easier to develop overall giving portfolios based on people’s best judgments about how to balance the various kinds and degrees of uncertainty.
Sorry to be annoying, but after reading the post “Animals of Uncertain Sentience” I am still very confused about the scope of this work.
My understanding is that any practical guidance on how to make decisions is out of the scope of that post. You are only looking at the question of whether the tools used should, in theory, be aiming to maximise true EV or not (even in cases where those tools do not involve calculating EV).
If I am wrong about the above do let me know!
Basically, I find phrases like “EV maximization decision procedure” and “using EV maximisation to make these decisions” confusing. EV maximisation is a goal that might or might not be best served by an EV-calculation-based decision procedure, or by a decision procedure that does not involve any EV calculations. I am sorry, I know this is persnickety, but I thought I would flag the things I am finding confusing. I do think being a bit more precise about this would help readers understand the posts.
“The team is only a few months old and the problems you’re raising are hard”
Yes, a full and thorough understanding of this topic and its rigorous application to cause prioritisation research would be hard.
But for what it’s worth, I would expect there are some easy quick wins in this area too. Lots of work has been done outside the EA community; it just has not been applied to cause prioritisation decision making, at least not that I have noticed so far...
Amazing. Super helpful to hear. It is useful to understand what you are and are not currently covering, and what the limits are. I very much hope that you get the funding for more and more research.
Thank you for the work you are doing on this.