1. What do you believe that seems important and that you think most EAs would disagree with you about?
2. What do you believe that seems important and that you think most people working on improving institutional (or other) decision-making would disagree with you about?
3. What do you think EAs are most often, most significantly, or most annoyingly wrong about? I'm perhaps particularly interested in ways in which you think longtermists are often wrong about politics, policy, and/or institutional decision-making.
4. What's an important way your own views/beliefs have changed recently? (I'm perhaps most interested in your independent impression, before updating on others' views.)
Great questions!

1. I'm on record as believing that working on EA-style optimization within causes, even ones that don't rise to the top of the most important causes to work on, is EA work that should be recognized as such and welcomed into the community. I got a lot of pushback when I published that post over four years ago, although I've since seen a number of people make similar arguments. I think EA conventional wisdom sometimes sets up a rather unrealistic, black-and-white understanding of why other people engage in altruistic acts: it's either 100% altruistic, in which case it goes into the EA bucket and you should try to optimize it, or it's not altruistic at all, in which case it's out of scope for us and we don't need to talk about it. In reality, I think many people pursue both donations and careers out of a combination of altruistic and selfish factors, and finding ways to engage productively about increasing the impact of the altruism while respecting the boundaries put in place by self-interest is a relatively unexplored frontier for this community that has the potential to be very, very productive.
2. This depends on whether you center your perspective on the EA community or not. There are lots of folks out there in the wider world trying to improve the functioning of institutions, but most of them aren't making any explicit attempt to prioritize among those institutions beyond whether they are primarily mission- or profit-driven. In this respect, the EA community's drive to prioritize IIDM (improving institutional decision-making) work based on the opportunity to improve the world is quite novel and even a bit radical. On the EA side of things, however, I think there's not enough recognition of the value that comes from engaging with fellow travelers who have been doing this kind of work for a lot longer, just without the prioritization that EA brings to the table. IIDM is an incredibly interdisciplinary field, and one of the failure modes that I see a lot is that good ideas gain traction within a short period of time among some subset of the professional universe, and then get more or less confined to that subset over time. I think EA's version of IIDM is in danger of meeting the same fate if we don't very aggressively try to bridge across sectoral, country, and disciplinary boundaries where people are using different language to talk about/try to do the same kinds of things.
3. My main discomfort with longtermism has long been that there's something that feels kind of imperialist, or at least foolish, about trying to determine outcomes for a far future that we know almost nothing about. Much IIDM work involves trying to get explicit about one's uncertainty, but the forecasting literature suggests that we don't have very good language or tools for precisely estimating very improbable events. To be clear, I have no issue with longtermist work that attacks "known unknowns": risks from AI, nuclear war, etc. are all pretty concrete even in the time horizon of our own lives. But if someone's case for the importance of something relies on imagining what life will be like more than a few generations from now, I'm generally going to be pretty skeptical that it's more valuable than bednets.
4. My own career direction has shifted pretty radically over the past five years, and EA-style thinking has had a lot to do with that. Even though I stand by my position in point #1 that cause neutrality shouldn't be a prerequisite for engaging in EA, I have personally found that embracing cause neutrality was very empowering for me, and I now wish I had done it sooner. It's something I hope to write more about in the future.
Thanks for these answers. I find your answer to Q2 particularly interesting. (FWIW, I also think I probably have a different perspective to yours re your answer to Q1, but I imagine any quick response from me would probably just rehash old debates.)
But if someone's case for the importance of something relies on imagining what life will be like more than a few generations from now, I'm generally going to be pretty skeptical that it's more valuable than bednets.
Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?
My main discomfort with longtermism has long been that there's something that feels kind of imperialist, or at least foolish, about trying to determine outcomes for a far future that we know almost nothing about. [...] To be clear, I have no issue with longtermist work that attacks "known unknowns": risks from AI, nuclear war, etc. are all pretty concrete even in the time horizon of our own lives.
How do you feel about longtermist work that specifically aims at one of the following?
1. Identifying unknown unknowns, e.g. through horizon-scanning
2. Setting ourselves up to be maximally robust to both known and unknown unknowns, e.g. through generically improving knowledge, improving decision-making, improving society's ability to adapt and coordinate (perhaps via things like improving global governance while preventing stable authoritarianism). I think efforts to ensure we can have a long reflection could be seen as part of this.
3. Improving our ability to do 1 and/or 2, e.g. through improving our forecasting and scenario planning abilities
Would you include even cases that rely on things like believing there's a non-trivial chance of at least ~10 billion humans per generation for some specified number of generations, with a similar or greater average wellbeing than the current average wellbeing? Or cases that rely on a bunch of more specific features of the future, like what kind of political systems, technologies, and economic systems they'll have?
My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so. Aside from technologies to prevent an asteroid from hitting the earth and similarly super-rare cataclysmic natural events, I'm hard pressed to think of examples of things that are obviously worth working on that don't meet that test. But I'm happy to be further educated on this subject.
How do you feel about longtermist work that specifically aims at one of the following?
Yeah, that sort of "anti-fragile" approach to longtermism strikes me as completely reasonable, and obviously it has clear connections to the IIDM cause area as well.
My general intuition is that if there's a strong case that some action today is going to make a huge difference for humanity dozens or hundreds of generations into the future, that case is still going to be pretty strong if we limit our horizon to the next 100 years or so.
I might be misunderstanding you here, so apologies if the rest of this comment is talking past you. But I think the really key point for me is simply that the "larger" and "better" the future would be if we get things right,[1] the more important it is to get things right. (This also requires a few moral assumptions, e.g. that wellbeing matters equally whenever it happens.)
To take it to one extreme, if we knew with certainty that extinction was absolutely guaranteed in 100 years, then that would massively reduce the value of reducing extinction risk before that time. At the other extreme, if we knew with certainty that, provided we reduce AI risk in the next 100 years, the future will last 1 trillion years, contain 1 trillion sentient creatures per year, and those creatures will all be very happy, free, aesthetically stimulated, having interesting experiences, etc., then that would make reducing AI risk extremely important.
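To make that intuition a bit more concrete, here's a rough, purely illustrative expected-value sketch. It reuses the made-up numbers from this exchange (roughly 10 billion people alive at a time for the "short" future, and the 1-trillion-year, 1-trillion-beings-per-year future above) and assumes the simple total-utilitarian-style accounting described in footnote [1]; none of the specific figures are claims about the actual future.

$$
\text{EV of reducing a risk} \;\approx\; \Delta p \times \underbrace{T \times N \times \bar{w}}_{\text{value of the future at stake}}
$$

where $\Delta p$ is the change in the probability of getting the good outcome, $T$ is how long the future lasts, $N$ is the number of beings per year, and $\bar{w}$ is their average wellbeing. Counting only the next 100 years with $\sim 10^{10}$ people alive at a time gives $T \times N \approx 10^{12}$ being-years; the trillion-year, trillion-beings-per-year future gives $T \times N \approx 10^{24}$. The same $\Delta p$ is therefore multiplied by a stake roughly $10^{12}$ times larger in the second scenario, which is why even low-confidence, big-picture guesses about how "large" and "good" the future could be end up mattering so much.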
A similar point can also apply to negative futures. If there's a non-trivial chance that some risk would result in a net negative future, then knowing how long that future would last, how many beings would be in it, and how negative it would be for those beings is relevant to how bad that outcome would be.
Most of the benefits of avoiding extinction or other negative lock-ins accrue more than 100 years from now, whereas (I'd argue) most of the predictable benefits of things like bednet distribution accrue within the next 100 years. So the relative priority of the two broad intervention categories could depend on how "large" and "good" the future would be if we avoid negative lock-ins. And that depends on having at least some guesses about the world more than 100 years from now (though they could be low-confidence and big-picture, rather than anything very confident or precise).[2]
So I guess I'm wondering whether you're uncomfortable with, or inclined to dismiss, even those sorts of low-confidence, big-picture guesses, or just the more confident and precise guesses?
(Btw, I think the paper The Case for Strong Longtermism is very good, and it makes the sort of argument I'm making much more rigorously than I'm making it here, so that could be worth checking out.)
[1] If we're total utilitarians, we could perhaps interpret "larger" and "better" as a matter of how long civilization or whatever lasts, how many beings there are per unit of time during that period, and how high their average wellbeing is. But I think the same basic point stands given other precise views and operationalisations.
[2] Put another way, I think I do expect that most things that are top priorities for their impact >100 years from now will also be much better in terms of their impact in the next 100 years than random selfish uses of resources would be. (And this will tend to be because the risks might occur in the next 100 years, or because things that help us deal with the risks also help us deal with other things.) But I don't necessarily expect them to be better than things like bednet distribution, which have been selected specifically for their high near-term impact.