The following quotes from the recent 80,000 Hours podcast episode with Will MacAskill seem like a weird combination to me:
“I really don’t know the point at which the arguments for longtermism just stop working because we’ve just used up all of the best targeted opportunities for making the long term go well, such that there’s just no difference between a longtermist argument and just an argument that’s about building a flourishing society in general. Maybe you hit that at 50%, maybe it’s 10%, maybe it’s even 1%. I don’t really know. But given what the world currently prioritises, should we care more about our grandkids and their grandkids and how the course of the next few millennia and millions of years go? Yes. And that’s the claim.”
“One important thing is to distinguish between is something a good thing to do, and is it the best thing to do? The core idea of effective altruism is we want to focus on the very best thing. And I entirely buy that even if you’re just concerned about what happens over the next century, reducing the risks of extinction and other sorts of catastrophes, like reducing the risk of misaligned AI takeover, are just extremely good things to do. And even concerned about the next century, society should be investing a lot more in making sure they don’t happen. Effective altruism is about doing the best we can. And certainly on its face, it would seem extremely suspicious and surprising if the best thing we could do for the very, very long term is also the very best thing we can do for the very short term.”
“So my last donation was to the Lead Exposure Elimination Project …, a new organisation incubated within the effective altruism community, which tries to eliminate lead paint and ultimately lead exposure from all sorts of sources. Lead exposures are really bad. It’s really bad from a health perspective, also lowers people’s IQ, lowers their general cognitive functioning. Some evidence that it kind of increases violence and social dysfunction. … So it seems like they’re really making traction. This is an example of very broad longtermist action, where I think this sort of intervention is maybe kind of different from certain other sorts of global health and development programmes. If I imagine a world where people are a bit smarter, they don’t have mild brain damage from lead exposure that has lowered their IQ and made them more impulsive, more violent, it just broadly seems like a much better society. That was the first argument. And then the second was just I think it’s really good for EAs to be doing things in the world — making it better, achieving concrete wins. … And then the final thing is just that they actually seem to me to be in real need of money and further funding, in a way that lots of the maybe more core, narrowly targeted longtermist work is not currently. So my sense is that a lot of the best giving opportunities are more in the stuff that’s a bit broader, because that really hasn’t been as much of a focus of grantmakers.”
Is it because the second quote is saying EA is about doing the very best thing, and what’s best for the long term is probably not what’s best for the short term, while the third quote is saying funding a very broad, non-lethal health intervention is justifiable from a longtermist basis?
Yes, I think so! It seems like saying: “all the theoretical arguments for longtermism are extremely important because they imply things not implied by other theories” — but when asked for the concrete implications, the answer is donating to something non-longtermists would like anyway, because it helps people today, while the future effects remain vague.