Thanks for writing this up! I think you’re raising many interesting points, especially about a greater focus on policy and going “beyond speculation”.
However, I’m more optimistic than you are about the degree of work invested in cause prioritisation, and the ensuing progress we’ve seen over the last few years. See this recent comment of mine—I’d be curious whether you find those examples convincing.
Also, speaking as someone who is working on this myself, there is quite a bit of research on s-risks and cause prioritisation from a suffering-focused perspective, which is one form of “different views”—though perhaps this is not what you had in mind. (I think it might be good to clarify in more detail what sort of work you want to see, because the term “cause prioritisation research” may mean very different things to different people.)
Hi Tobias, thank you for the comment. Yes, I'm very glad for CLR etc. and all the s-risk research.
An interesting thing I noted when reading through your recent comment is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing on AI risk right now. They suggest that to date the community has perhaps moved too quickly towards a specific cause area (AI / immediate x-risk mitigation) rather than continuing to explore.
I don’t really know what to make of that. Do your examples weaken the point I am making or strengthen it? Is this evidence that useful research is happening, or is this evidence that we as a community under-invest in exploration?
Maybe there is no universal answer to this question and it depends on the individual reader and how your examples affect their current assumptions and priors about the world.
Yeah, I would perhaps say that the community has historically been too narrowly focused on a small number of causes. But I think this has been improving for a while, and we’re now close to the right balance. (There is also a risk of being too broad, by calling too many causes important and not prioritising enough.)
An interesting thing I noted when reading through your recent comment is that all 3 of the examples of progress involve a broadening of EA, expanding horizons, pushing back on the idea that we need to be focusing on AI risk right now.
The post Tobias was commenting on requested “novel major” insights specifically. This guarantees that the examples provided would be ones that broadened EA, expanded its horizons, and pushed back on whatever priorities EA had before 2015. So I don’t think we should read anything into the fact that a high proportion of the examples were of that kind, rather than e.g. refinements of existing ideas or object-level work within particular cause areas (since the question excluded such things).
(That said, I do think that the number and nature of examples we can come up with in answering that question is relevant to how useful further cause prioritisation research would be. In particular, the fact that commenters came up with some examples rather than 0 examples seems to be evidence that some cause prioritisation research occurred and was useful over the last 5 years. And the fact that they came up with relatively few examples is evidence that relatively little such research occurred or was useful. And this could perhaps inform our predictions about the future.)