I think there are multiple ways to be a neartermist or longtermist, but “currently existing” and “next 1 year of experiences” would exclude almost all of the effective animal advocacy we actually do, and the second would have ruled out deworming.
Are you expecting yourself (or the average EA) to be able to cause greater quantities of intense pleasure than the quantities of intense suffering you (or the average EA) can prevent over the next ~30 years, possibly accounting for AGI? Perhaps through large numbers of artificially sentient beings made to experience intense pleasure, or new drugs and technologies for humans?
To me, the distinction between neartermism and longtermism is primarily based on decision theory and priors. Longtermists tend to be willing to bet more on avoiding a single specific existential catastrophe (usually extinction), even though the average longtermist is extremely unlikely to be the one to avert it. Neartermists rely on better evidence, but seem prone to ignoring what they can’t measure (the McNamara fallacy). It seems hard to have predictably large positive impacts beyond the average human lifespan other than through one-shots the average EA is very unlikely to be able to affect, or without predictably large positive effects in the nearer term, which could otherwise qualify the intervention as a good neartermist one.