Great work! I really appreciate how relevant FP Climate's work is to the broader project of effective altruism and decision-making under uncertainty. Heuristics like FP Climate's impact multipliers can be modelled, and I am glad you are working towards that.
I wish Open Philanthropy moved towards your approach, at least in the context of global health and wellbeing, where there is less uncertainty. Open Philanthropy has a much larger team and moves much more money than FP, so I am surprised by the low level of transparency and the lack of rigorous comparative approaches in its grantmaking.
Thanks, Vasco!
I think it is hard to judge exactly what OP is doing, given that they do not publish everything and probably (and understandably!) also have a significant backlog.
But, directionally, I strongly agree that the lack of comparative methodology in EA is a big problem and I am currently writing a post on this.
To a first approximation, I perceive the situation as follows:
Top-level / first encountering a cause:
ITN analysis: inherently comparative and useful when approaching an issue from ignorance (the impact-differentiating features of ITN are very general and make sense as approximations when not knowing much), but often applied in a way that falls short of its potential (e.g. non-comparable data, no clear formalization of tractability)
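As a toy illustration of the comparability ITN affords when done with consistent units (all cause names and numbers below are hypothetical placeholders, not real estimates):

```python
# Toy ITN comparison with deliberately comparable units across causes.
# All causes and figures are hypothetical placeholders, for illustration only.

causes = {
    # importance: scale of the problem (arbitrary units of value at stake)
    # tractability: fraction of the problem solved per doubling of resources
    # resources: current annual resources devoted to the problem ($M)
    "cause_A": {"importance": 1000, "tractability": 0.02, "resources": 500},
    "cause_B": {"importance": 100, "tractability": 0.10, "resources": 20},
}

def marginal_value(c):
    # Standard ITN logic: the value of a marginal dollar scales with
    # importance * tractability / current resources (neglectedness).
    return c["importance"] * c["tractability"] / c["resources"]

ranking = sorted(causes, key=lambda name: marginal_value(causes[name]), reverse=True)
for name in ranking:
    print(f"{name}: {marginal_value(causes[name]):.2f} value units per marginal $M")
```

The point of the sketch is only that once all causes are scored in the same units, the ranking falls out mechanically; with non-comparable data, no such ranking is possible.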
Level above specific CEAs:
In GHD, stuff like common discounts for generalizability
In longtermism, maybe some templates or common criteria
A large hole in the middle:
It is my impression that there is a fairly large space of largely unexplored "mid-level" methodology and comparative concepts that could much improve relative impact estimates across several domains. These could be within a cause (which is what we are trying to do for climate), but also portable and/or across causes, e.g. stuff like:
breaking down "neglectedness" into constituent elements, such as the extent to which low-hanging fruit has already been picked, the probability of funding additionality, and the probability of activity additionality, with different data (or data aggregations) available for each, allowing for more precise estimates relatively cheaply and improving on first-cut neglectedness estimates.
what is the multiplier from advocacy and how does this depend on the ratio of philanthropic to societal effort for a problem, the kind of problem (how technical? etc.), and location?
how do we measure organizational strength, and how important is it compared to other factors?
what returns should we expect from engaging in different regions and what does this depend on?
the value of geopolitical stability as a risk-reducer for many direct risks, etc.
what should we assume about the steerability of technological trajectories, both when we want to accelerate them and when we want to do the opposite?
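The neglectedness decomposition in the first bullet above can be made concrete with a small sketch (the function name and all probabilities are hypothetical, purely illustrative):

```python
# Sketch: decomposing "neglectedness" into separately estimable factors.
# All numbers are hypothetical placeholders, not real estimates.

def neglectedness_multiplier(
    p_funding_additional: float,   # P(the money would not have been granted anyway)
    p_activity_additional: float,  # P(the activity would not have happened anyway)
    fruit_picked: float,           # fraction of low-hanging fruit already picked, in [0, 1]
) -> float:
    # A first-cut neglectedness estimate treats these as one opaque factor;
    # separating them lets each be informed by different (cheap) data sources.
    return p_funding_additional * p_activity_additional * (1.0 - fruit_picked)

# Example: highly additional funding, somewhat additional activity,
# half the easy wins already taken.
multiplier = neglectedness_multiplier(0.8, 0.6, 0.5)
print(f"{multiplier:.2f}")
```

Each factor can then be updated independently as better data comes in, rather than re-estimating one opaque "neglectedness" number from scratch.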
To me, these questions seem underexplored. My current hunch is that this is because, once ITN is done, comparison breaks down into cause-specific silos and evaluating things for whether they meet a bar, which does not encourage overall-comparative-methodology-and-estimate building.
Would be curious for thoughts on whether that seems right.
Thanks for sharing your thoughts! They seem right to me. A typical argument against "overall-comparative-methodology-and-estimate building" is that the opportunity cost is high, but it seems worth it on the margin given the large sums of money being granted. However, grantmakers have disagreed with this, at least implicitly, in the sense that the estimation infrastructure is apparently not super developed.
It is not, but I would not see this as revealed preference.
I think it's easy for there to be a relative underinvestment in comparative methodology when most grantmakers and charity evaluators are specialized in specific causes or, at least, work sequentially through different cause-specific questions.