tl;dr:

1. I do think what we're doing can be seen as an attempt to approximate the process of evaluating interventions based on everything relevant to their expected marginal utility per dollar.
2. But we never model anything close to all of reality's details, so what we focus on, what proxies we use, etc. matters. And it usually seems more productive to "factor out" certain questions, like "should we focus on the long-term future or the nearer term?" and "should we focus on humans or nonhumans?", and have dedicated discussions about them, rather than discussing them in detail within each intervention prioritisation decision or cost-effectiveness model.
3. "Longtermism" highlights a category of effects that previously received extremely little attention. "Wild animal suffering" is analogous. So the relevant effects would've been especially consistently ignored in models if not for these framings/philosophies/cause areas, even if in theory they always "should have been" part of our models.

[I wrote this all quickly; let me know if I should clarify or elaborate on things]
---
Here's one way to flesh out point 2:
I think (almost?) no one has ever actually tried to make anything close to a fully fine-grained model of the expected marginal utility per dollar of an intervention.
I.e., I think all cost-effectiveness models that have ever been made massively simplify some things, ignore other things, use proxies, etc.
As such, it really matters what "aspects of the world" you're highlighting as worth modelling in detail, what proxies you use, etc.
E.g., I think GiveWell's evaluations are basically just based on the next few decades or so (as well as things like room for more funding), and don't explicitly consider any time beyond that
(Maybe this is a bit wrong, since I haven't looked closely at GiveWell models for a while, but I think it's right)
Meanwhile, prioritisation by longtermists focuses mostly on long-term effects, and does less detailed modelling of, and places less emphasis on, intrinsic effects in the nearer term
Effects in the nearer term that have substantial expected impact on the long term are (ideally) considered more, of course
Predictably, this leads places like GiveWell to focus more on interventions that seem more likely to be best in the near-term, and places like the EA Long-Term Future Fund to focus more on interventions that seem more likely to be best in the long-term
So whether we're bought into longtermism seems in theory like it'd make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case (the toy sketch below illustrates the horizon point)
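To make that concrete, here's a minimal toy sketch, with invented numbers rather than anything drawn from a real GiveWell or longtermist model, of how the choice of modelling horizon alone can flip which of two stylised interventions looks best:

```python
# A toy sketch of how the modelling horizon alone can flip a ranking.
# All numbers are invented for illustration; this is not any real
# GiveWell or longtermist cost-effectiveness model.

def modelled_value(benefit_stream, horizon_years):
    """Sum an intervention's expected benefits up to a cutoff year."""
    return sum(benefit for year, benefit in benefit_stream if year <= horizon_years)

# Hypothetical (year, expected benefit) streams for two stylised interventions:
near_term_intervention = [(year, 100) for year in range(1, 31)]       # steady benefits over ~30 years
longtermist_intervention = [(10, 50), (100, 2_000), (1_000, 50_000)]  # expected benefits mostly distant

for horizon in (30, 10_000):
    print(f"horizon={horizon:>6} years:",
          f"near-term={modelled_value(near_term_intervention, horizon):>7},",
          f"longtermist={modelled_value(longtermist_intervention, horizon):>7}")

# With a ~30-year horizon the near-term intervention looks far better;
# with a very long horizon the ranking flips.
```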
Here's another way of fleshing out point 2, copied from a comment I made on a doc where someone essentially proposed evaluating all interventions in terms of WELLBYs:
I'm inclined to think that, for longtermist interventions, the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs.
I think the core reason is that this allows one to compare many longtermist interventions against each other without explicitly accounting for issues like how large the future will be, what population ethics view one holds, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals … there'll be, how much moral weight to assign to each of those types of beings, … Then those issues can just be taken into account for the rarer task of comparing longtermist interventions to other interventions
[Also, my impression is that WELLBYs are currently conceptualised for humans only, right?]
It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.
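To make the exchange-rate idea concrete, here's a made-up illustration (the numbers and the value V are invented placeholders, not anyone's actual estimates): if one's all-things-considered estimate of the expected value of the whole future is V WELLBYs, then an intervention achieving a 0.01 percentage point reduction in x-risk is worth roughly 0.0001 × V WELLBYs in expectation. All the hard questions (size of the future, population ethics, moral weights) are concentrated in the single conversion factor V, so longtermist interventions can be compared against each other purely in percentage-point terms.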
Here's another way to flesh out point 2:
GiveWell benefits from the existence of many scientific fields like epidemiology. And it really makes sense that those fields exist in their own right, and then their relevant conclusions are "plugged in" to GiveWell models or inform high-level decisions about what to bother making models about and how to structure the models, rather than the fields basically existing only "within GiveWell models".
Likewise, I think it makes sense for there to be communities of people and bodies of work looking into things like how large the future will be, what population ethics view one should hold, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals … there'll be, how much moral weight to assign to each of those types of beings, …
And I think it makes sense for that to not just be part of our cost-effectiveness models
All that said:
there may be many models where it makes sense to explicitly model both the intrinsic value of near-term effects and the intrinsic value of long-term effects (e.g., I think I recall that ALLFED does this)
and there may be many models where it makes sense to include parameters for these "cross-cutting uncertainties", like what population ethics view one should hold, and see how that affects the conclusions (see the sketch after this list)
and ultimately I do think that what we're doing should be seen as an attempt to approximate the process of deciding what to do based on all morally relevant effects, weighted appropriately
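As a hedged illustration of that parameterised approach, here's a minimal sketch, with a hypothetical model structure and made-up numbers, of treating one cross-cutting uncertainty (a crude population-ethics weight) as an explicit parameter and sweeping it to see how sensitive the conclusion is:

```python
# A minimal sketch of parameterising a "cross-cutting uncertainty".
# The model structure and all numbers are hypothetical placeholders.

def modelled_value(near_term_value, long_term_value_per_life,
                   expected_future_lives, totalist_weight):
    """Combine near- and long-term value under one crude population-ethics parameter.

    totalist_weight: 1.0 = future lives count in full (totalism);
    0.0 = only effects on presently existing people count.
    """
    return near_term_value + totalist_weight * expected_future_lives * long_term_value_per_life

# Sweep the parameter and see whether the conclusion is sensitive to it.
for w in (0.0, 0.1, 0.5, 1.0):
    v = modelled_value(near_term_value=1_000, long_term_value_per_life=0.002,
                       expected_future_lives=10_000_000, totalist_weight=w)
    print(f"totalist_weight={w}: modelled value = {v:,.0f}")
```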
---

> So whether we're bought into longtermism seems in theory like it'd make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case
It seems backwards to first "buy into" longtermism, and then use that to evaluate interventions. You should instead evaluate longtermist interventions, and use that to decide whether to buy into longtermism.
> the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs. [...] It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.
This seems fine; if you're focusing on percentage point reduction in x-risks, you can abstract away from questions about the size of the future, population ethics, etc. But the key is having the exchange rate, which will be a function of those parameters. So you can work on a specific parameter (e.g., x-risk), which is then plugged back into the exchange rate function.
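A rough sketch of that "exchange rate as a function of those parameters" idea, with hypothetical names and placeholder numbers rather than real estimates:

```python
# Hypothetical sketch: estimate the x-risk reduction in its own units, and
# only convert to a common currency (here WELLBYs) when comparing across
# cause areas. All names and numbers are placeholders, not real estimates.

def xrisk_reduction_to_wellbys(delta_xrisk_pp, expected_future_wellbys):
    """Convert a percentage-point x-risk reduction into expected WELLBYs.

    expected_future_wellbys is where the hard questions live: the size of
    the future, population ethics, moral weights for different beings, etc.
    """
    return (delta_xrisk_pp / 100) * expected_future_wellbys

# A longtermist analysis can output just delta_xrisk_pp per dollar; the
# conversion can be re-run whenever views on the bundled parameters change.
print(xrisk_reduction_to_wellbys(delta_xrisk_pp=0.001, expected_future_wellbys=1e15))
```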