I am very excited to see this research. It's the kind of thing I think EAs should be doing a lot more of, and it seems shocking that it has taken us more than a decade to get round to such basic, fundamental questions on cause prioritisation. Thank you so much for doing this.
I do, however, have one question and one potential concern.
Question: My understanding from reading the research agenda and plan here is that you are NOT looking into how best to make decisions under uncertainty (Knightian uncertainty, cluelessness, etc.). It looks like you are focusing on resolving WHAT decision making should aim for (e.g. whether or not to maximise true EV) but not HOW best to make those decisions (e.g. which decision tools to use, to what extent to rely on calculated EV as a tool versus other tools, when practically to satisfice rather than maximise, etc.). It looks like you might touch on the HOW within the specific sub-question of uncertainty over time, but not otherwise. Is this a correct reading of your research aims and agenda?
If so, this does put limits on the conclusions you can draw.
I think that the majority (but by no means all) of the people I know in EA who have a carefully considered view pushing them to focus on, say, global health over x-risk do so not because they disagree on the WHAT but because they disagree on the HOW. They are not avoiding maximising EV, non-consequentialist, or risk averse; they just put less weight on simple EV calculations as a decision tool, and the set of tools they do use directs them away from x-risk work.
Conclusions or models built on the WHAT question alone would therefore be of limited use – not just because you need both the HOW to decide and the WHAT to aim* for in order to make a decision, but because they would not address what, in my experience, is the primary (although not the only) crux of people's actual disagreement here.
I'd be curious to hear whether you agree with this analysis of the limits of the (still very important) work you are doing.
* As an aside, I actually think that in some cases it's possible to make do with the HOW but not the WHAT, though not the other way round. For example, you might believe it has been shown empirically that, in deep-uncertainty situations, a strategy of robust satisficing rather than maximising allows players to win more war-game scenarios, or to feel more satisfied with their decisions at a later point in time, and therefore believe that adopting such a strategy in situations of deep uncertainty is optimal. You could believe this without taking a stance on, or knowing, whether such a strategy maximises true EV, is risk averse, etc.
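To make that aside concrete, here is a minimal sketch of the contrast in Python. The options, payoffs, and aspiration level are entirely invented for the illustration; none of this comes from the posts themselves.

```python
# Toy contrast between EV maximisation and robust satisficing under deep
# uncertainty. All payoffs and the aspiration level are invented.

# Payoff of each option under three scenarios to which we cannot
# (or will not) assign credences.
payoffs = {
    "x_risk_project": [0.0, 0.0, 1000.0],        # huge upside in one scenario
    "global_health_project": [10.0, 12.0, 9.0],  # modest but robust payoffs
}

ASPIRATION = 8.0  # the "good enough" threshold for the satisficer

def ev_maximise(payoffs):
    """Pick the option with the highest mean payoff, i.e. EV under a
    uniform credence over the scenarios."""
    return max(payoffs, key=lambda o: sum(payoffs[o]) / len(payoffs[o]))

def robust_satisfice(payoffs, aspiration):
    """Pick the option whose worst-case payoff clears the aspiration
    level by the widest margin; no credences over scenarios are needed."""
    return max(payoffs, key=lambda o: min(payoffs[o]) - aspiration)

print(ev_maximise(payoffs))                   # -> x_risk_project
print(robust_satisfice(payoffs, ASPIRATION))  # -> global_health_project
```

The point is that you could adopt the second rule purely on empirical track-record grounds, without ever settling whether it maximises true EV.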
Thanks for this. You're right that we don't give an overall theory of how to handle either decision-theoretic or moral uncertainty. The team is only a few months old and the problems you're raising are hard. So, for now, our aims are just to explore the implications of non-EVM decision theories for cause prioritization and to improve the available tools for thinking about the EV of x-risk mitigation efforts. Down the line – and with additional funding! – we'll be glad to tackle many additional questions. And, for what it's worth, we do think that the groundwork we're laying now will make it easier to develop overall giving portfolios based on people's best judgments about how to balance the various kinds and degrees of uncertainty.
Sorry to be annoying, but after reading the post "Animals of Uncertain Sentience" I am still very confused about the scope of this work.
My understanding is that the practical question of how to make decisions is out of scope for that post. You are only looking at whether the tools used should, in theory, be aiming to maximise true EV or not (even in cases where those tools do not involve calculating EV).
If I am wrong about the above, do let me know!
Basically, I find phrases like "EV maximization decision procedure" and "using EV maximisation to make these decisions" confusing. EV maximisation is a goal that might or might not be best served by an EV-calculation-based decision procedure, or by a decision procedure that does not involve any EV calculations at all. I am sorry, I know this is persnickety, but I thought I would flag the things I am finding confusing. I do think being a bit more precise about this would help readers understand the posts.

Thank you for the work you are doing on this.
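In case it helps, here is a toy sketch of the distinction I mean, with invented numbers (the options, noise level, and "track record" rationale are all hypothetical): the goal of maximising true EV can sometimes be better served by a procedure that never calculates an EV at all.

```python
# Toy illustration of the goal/procedure distinction. The goal (maximise
# true EV) is fixed; what varies is the decision procedure. All numbers
# are invented for the illustration.
import random

random.seed(0)

TRUE_EV = {"a": 10.0, "b": 11.0}  # unknown to the decision maker

def explicit_ev_procedure():
    """Procedure 1: estimate each option's EV and pick the maximum.
    The estimates are very noisy, so it often picks the worse option."""
    estimates = {o: ev + random.gauss(0, 20) for o, ev in TRUE_EV.items()}
    return max(estimates, key=estimates.get)

def heuristic_procedure():
    """Procedure 2: a rule of thumb involving no EV calculation
    (here, simply backing option "b", say on track-record grounds)."""
    return "b"

# Score each procedure by the true EV of the choices it produces.
for procedure in (explicit_ev_procedure, heuristic_procedure):
    achieved = sum(TRUE_EV[procedure()] for _ in range(10_000)) / 10_000
    print(procedure.__name__, round(achieved, 2))
# Typical output: the noisy EV calculator achieves ~10.5; the heuristic, 11.0.
```

So "EV maximisation" names the target, while either procedure is a candidate tool for hitting it; that is the distinction I'd like the posts to keep explicit.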
“The team is only a few months old and the problems you’re raising are hard”
Yes, a full and thorough understanding of this topic, and its rigorous application to cause prioritisation research, would be hard.
But for what it's worth, I would expect there are some easy quick wins in this area too. Lots of work has been done outside the EA community; it just hasn't been applied to cause prioritisation decision making, at least not that I have noticed so far...
Amazing, super helpful to hear. It's useful to understand what you are and are not currently covering, and what the limits are. I very much hope that you get the funding for more and more research.