Surely most neartermist funders think that the probability that we get transformative AGI this century is low enough that it doesn’t have a big impact on calculations like the ones you describe?
There are a couple views by which neartermism is still worthwhile even if there’s a large chance (like 50%) that we get AGI soon—maybe you think neartermism is useful as a means to build the capacity and reputation of EA (so that it can ultimately make AI safety progress), or maybe you think that AGI is a huge problem but there’s absolutely nothing we can do about it. But these views are kinda shaky IMO.
It's not obvious to me that most neartermist funders think this; I'd guess that for a lot of people, AGI and global health live in separate magisteria, or that they're not working on alignment because they see it as intractable, not because they think transformative AGI is unlikely.
This could be tested with a survey.
I agree with Thomas Kwa on this.
"There are a couple views by which neartermism is still worthwhile even if there's a large chance (like 50%) that we get AGI soon -- …"
I think neartermist causes are worthwhile in their own right, but think some interventions are less exciting when (in my mind) most of the benefits are on track to come after AGI.
The idea that a neartermist funder becomes convinced that world-transformative AGI is right around the corner, and then takes action by dumping all their money into fast-acting welfare enhancements, instead of trying to prepare for or influence the immense changes that will shortly occur, almost seems like parody. See for instance the concept of "ultra-neartermism": https://forum.effectivealtruism.org/posts/LSxNfH9KbettkeHHu/ultra-near-termism-literally-an-idea-whose-time-has-come
Fair enough. My prediction is that the idea will become more palatable as we get closer to AGI over the next few years. And even if there is only a small chance we get the opportunity to do this, I think it could be worth thinking about further, given the amount of money earmarked for neartermist causes.