Thanks for the write-up. I think it’s important to have specified methods for conducting any research, and this post does so clearly (or at least as clearly as is possible given how abstract this work is).
Have you looked at the literature on similar/analogous prioritisation research?
I intuitively think that the hard work will be in modelling causality. Have you done any work on that component?
(My views, not Convergence’s, as with most of my comments)
I think I’d personally also suspect that modelling causality might be the hardest step. But I’d also suspect that mapping the space may seem easier than it really is: it’s quite easy to map part of the space, and then quite easy to overestimate how much of the space you’ve mapped, and to underestimate the odds that you’ve failed to even notice some very important (clusters of) possible interventions or consequences. This is related to ideas like unknown unknowns and Bostrom’s crucial considerations.
I think some of the biggest wins of EA have probably been from mapping (maybe particularly of consequences, rather than interventions), more than from modelling. In particular, I have in mind identifying that it may be worth considering things like the wellbeing of people millions of years from now, astronomical waste, AI safety, global famine in a nuclear or impact winter (I’m thinking of ALLFED’s work), and the welfare of wild animals. I think all of these things were noticed by some people before EAs got to them, but that they were mostly ignored by the vast majority of people.
In fact, I think one of the main reasons why “modelling” in the more standard sense is often so hard to do well is that there are many possible unknown unknowns like the things I listed, which could totally change the results of one’s model but which one hasn’t even thought to account for. So, since this framework separates mapping the space from modelling causality, the modelling-causality step itself may be less difficult than one would normally expect “modelling” to be (though definitely still difficult).
(Prioritising between strategies seems to me the simplest step. And constructing strategies seems like a step where it might be hard to get optimal results, but where the gap between optimal and merely OK results won’t be as large as it might be for the mapping and modelling stages.)
I’m less sure what specifically you’re interested in with your two questions. Also, I didn’t come up with the framework in this post (my role was more to refine and communicate it), so my own knowledge and views weren’t pivotal in forming it. But I can say that it seems to me that this framework doesn’t clash with, and can be complementary to, various ideas, prioritisation processes, and modelling processes from groups like 80k, GPI, GiveWell, and Charity Entrepreneurship. And I believe that Convergence will soon be releasing various other work related to things like frameworks and tools for modelling causality and (I think) prioritisation research.