What’s your take on this argument:

“Why do we need longtermism? Let’s just do the usual approach of evaluating interventions based on their expected marginal utility per dollar. If the best interventions turn out to be aimed at the short term or the long term, who cares?”

tl;dr:

1. I do think what we’re doing can be seen as an attempt to approximate the process of evaluating interventions based on everything relevant to their expected marginal utility per dollar.
2. But we never model anything close to all of reality’s details, so what we focus on, which proxies we use, etc. matters. And it usually seems more productive to “factor out” certain questions, like “should we focus on the long-term future or the nearer term?” and “should we focus on humans or nonhumans?”, and have dedicated discussions about them, rather than discussing them in detail within each intervention prioritisation decision or cost-effectiveness model.
3. “Longtermism” highlights a category of effects that previously received extremely little attention. “Wild animal suffering” is analogous. So the relevant effects would’ve been especially consistently ignored in models if not for these framings/philosophies/cause areas, even if in theory they always “should have been” part of our models.
[I wrote this all quickly; let me know if I should clarify or elaborate on things]
---
Here’s one way to flesh out point 2:
- I think (almost?) no one has ever actually tried to make anything close to a fully fine-grained model of the expected marginal utility per dollar of an intervention.
  - I.e., I think all cost-effectiveness models that have ever been made massively simplify some things, ignore other things, use proxies, etc.
  - As such, it really matters which “aspects of the world” you highlight as worth modelling in detail, which proxies you use, etc.
- E.g., I think GiveWell’s evaluations are basically just based on the next few decades or so (as well as things like room for more funding), and don’t explicitly consider any time beyond that.
  - (Maybe this is a bit wrong, since I haven’t looked closely at GiveWell’s models for a while, but I think it’s right.)
- Meanwhile, prioritisation by longtermists focuses mostly on long-term effects, and does less detailed modelling of, and places less emphasis on, intrinsic effects in the nearer term.
  - Effects in the nearer term that have a substantial expected impact on the long term are (ideally) considered more, of course.
- Predictably, this leads places like GiveWell to focus more on interventions that seem more likely to be best in the near term, and places like the EA Long-Term Future Fund to focus more on interventions that seem more likely to be best over the long term (the toy sketch below illustrates this).
- So whether we’re bought into longtermism seems in theory like it’d make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case.
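To make that concrete, here’s a minimal toy sketch in Python of how truncating the modelled time horizon can flip which intervention looks best. The intervention names and all figures are invented purely for illustration; they aren’t meant to reflect real GiveWell or Long-Term Future Fund estimates.

```python
# Toy illustration only: every figure here is invented for the sake of the example.
# Each intervention's expected impact is split into intrinsic value accruing
# within the next few decades and expected intrinsic value accruing beyond that.

interventions = {
    # name: (near-term value per dollar, long-term value per dollar)
    "bednets":     (10.0, 1.0),
    "biosecurity": (0.5, 100.0),
}

def value_per_dollar(name: str, include_long_term: bool) -> float:
    near, far = interventions[name]
    return near + (far if include_long_term else 0.0)

for label, include_long_term in [("next few decades only", False),
                                 ("long-term effects included", True)]:
    best = max(interventions, key=lambda n: value_per_dollar(n, include_long_term))
    print(f"{label}: top pick = {best}")

# With these made-up numbers, the truncated model picks "bednets" and the
# model that includes long-term effects picks "biosecurity".
```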
Here’s another way of fleshing out point 2, copied from a comment I made on a doc where someone essentially proposed evaluating all interventions in terms of WELLBYs (wellbeing-adjusted life years):
- I’m inclined to think that, for longtermist interventions, the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs.
- I think the core reason is that this allows one to compare many longtermist interventions against each other without explicitly accounting for issues like how large the future will be, what population ethics view one holds, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals … there’ll be, how much moral weight to assign to each of those types of beings, … Those issues can then be taken into account only for the rarer task of comparing longtermist interventions to other interventions (sketched below).
- [Also, my impression is that WELLBYs are currently conceptualised for humans only, right?]
- It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.
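Here’s a minimal sketch of that division of labour, with a hypothetical exchange-rate function and made-up numbers; the function name, parameter values, and figures are my own illustrative inventions, not anyone’s actual estimates.

```python
# Toy sketch only: all names, parameters, and numbers are hypothetical.

def xrisk_points_to_wellbys(points: float,
                            expected_future_wellbys: float,
                            population_ethics_weight: float) -> float:
    """Exchange rate: convert percentage points of x-risk reduction into
    expected WELLBYs, given one's views on the contested parameters."""
    return (points / 100.0) * expected_future_wellbys * population_ethics_weight

# Comparing two longtermist interventions only needs the shared metric:
ai_safety_points_per_million = 0.002    # made-up figure
biosecurity_points_per_million = 0.001  # made-up figure
print(ai_safety_points_per_million > biosecurity_points_per_million)

# The contested parameters only enter for the rarer cross-cause comparison:
longtermist_wellbys_per_million = xrisk_points_to_wellbys(
    points=ai_safety_points_per_million,
    expected_future_wellbys=1e15,   # a stand-in guess about the size of the future
    population_ethics_weight=1.0,   # e.g., something like a total view
)
neartermist_wellbys_per_million = 5e4  # made-up near-term figure
print(longtermist_wellbys_per_million, neartermist_wellbys_per_million)
```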
Here’s another way to flesh out point 2:
- GiveWell benefits from the existence of many scientific fields, like epidemiology. And it really makes sense that those fields exist in their own right, and that their relevant conclusions are then “plugged in” to GiveWell models or inform high-level decisions about what to bother making models about and how to structure the models, rather than the fields basically existing only “within GiveWell models”.
- Likewise, I think it makes sense for there to be communities of people and bodies of work looking into things like how large the future will be, what population ethics view one should hold, how many biological humans vs whole brain emulations vs artificial sentiences vs nonhuman animals … there’ll be, how much moral weight to assign to each of those types of beings, …
- And I think it makes sense for that to not just be part of our cost-effectiveness models.
All that said:

- there may be many models where it makes sense to explicitly model both the intrinsic value of near-term effects and the intrinsic value of long-term effects (e.g., I think I recall that ALLFED does this)
- and there may be many models where it makes sense to include parameters for these “cross-cutting uncertainties”, like what population ethics view one should hold, and see how that affects the conclusions (see the sketch after this list)
- and ultimately I do think that what we’re doing should be seen as an attempt to approximate the process of deciding what to do based on all morally relevant effects, weighted appropriately.
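For the second bullet, here’s a minimal sketch of what treating a cross-cutting uncertainty as an explicit model parameter might look like, reusing the made-up figures from the earlier sketch; the parameter name and values are hypothetical.

```python
# Toy sensitivity check: vary one cross-cutting parameter (here, the weight
# placed on value accruing to future generations) and see whether the
# preferred intervention changes. All figures are invented.

interventions = {
    # name: (near-term value per dollar, long-term value per dollar)
    "bednets":     (10.0, 1.0),
    "biosecurity": (0.5, 100.0),
}

def value_per_dollar(name: str, future_weight: float) -> float:
    near, far = interventions[name]
    return near + future_weight * far

for future_weight in [0.0, 0.01, 0.1, 1.0]:
    best = max(interventions, key=lambda n: value_per_dollar(n, future_weight))
    print(f"future_weight={future_weight}: prefer {best}")

# With these numbers, the conclusion flips from "bednets" to "biosecurity"
# somewhere between future_weight = 0.01 and 0.1.
```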
---

“So whether we’re bought into longtermism seems in theory like it’d make a difference to how we evaluate things and what we end up prioritising, and in practice that also seems to be the case.”
It seems backwards to first “buy into” longtermism, and then use that to evaluate interventions. You should instead evaluate longtermist interventions, and use that to decide whether to buy into longtermism.
“the metrics that are usually most useful would be things like percentage or percentage point reduction in x-risks or increase in total value of the future, rather than things like WELLBYs. [...] It might be best to have one main metric for each of the main broad cause areas, and then a very rough sense of the exchange rate between those metrics.”
This seems fine; if you’re focusing on percentage point reduction in x-risks, you can abstract away from questions about the size of the future, population ethics, etc. But the key is having the exchange rate, which will be a function of those parameters. So you can work on a specific parameter (e.g., x-risk), which is then plugged back into the exchange rate function.