Thanks for the write-up. I think it's important to have specified methods for conducting any research, and this post does so clearly (at least as clearly as possible in this abstract work).
Have you looked at the literature on similar/analogous prioritisation research?
I intuitively think that the hard work will be in modelling causality. Have you done any work on that component?
Thanks!
(My views, not Convergence's, as with most of my comments)
I think I'd personally also suspect that modelling causality might be the hardest step. But I'd also suspect that mapping the space may seem easier than it really is, because it's quite easy to map part of the space, and then quite easy to overestimate how much of the space you've mapped and underestimate the odds that you've failed to even notice some very important (clusters of) possible interventions or consequences. This is related to ideas like unknown unknowns and Bostrom's crucial considerations.
I think some of the biggest wins of EA have probably come from mapping (maybe particularly of consequences, rather than interventions), more than from modelling. In particular, I have in mind identifying that it may be worth considering things like the wellbeing of people millions of years from now, astronomical waste, AI safety, global famine in a nuclear or impact winter (I'm thinking of ALLFED's work), and the welfare of wild animals. I think all of these things were noticed by some people before EAs got to them, but that they were mostly ignored by the vast majority of people.
In fact, I think one of the main reasons why "modelling" in the more standard sense is often so hard to do well is that there are many possible unknown unknowns like those I listed, which could totally change the results of one's model but which one hasn't even thought to account for. So with this framework separating out mapping the space from modelling causality, the modelling-causality step itself may be less difficult than one would normally expect "modelling" to be (though still definitely difficult).
(Prioritising between strategies seems to me the simplest step. And constructing strategies seems like a step where it might be hard to get optimal results, but where the gap between optimal and OK results won't be as huge as it might be for the mapping and modelling stages.)
I'm less sure what specifically you're interested in with your two questions. Also, I didn't come up with the framework in this post (my role was more to refine and communicate it), so my own knowledge and views weren't pivotal in forming it. But I can say that this framework seems to me not to clash with, and to be complementary to, various ideas, prioritisation processes, and modelling processes from groups like 80k, GPI, GiveWell, and Charity Entrepreneurship. And I believe that Convergence will soon release various other work on frameworks and tools for modelling causality and (I think) prioritisation research.