True; empirically there is a lot of crossover in ‘which risks and causes we should care about funding’. In the other direction, pandemic prevention seems to serve both masters.
But, for clarification, I think the reason the “longtermists working on AI risk” care about the total doom in 15 years is because it could cause extinction and preclude the possibility of a trillion happy sentient beings in the long term. Not because it will be bad for people alive today.
“deworming or charter cities are seeking payoffs that only get realized on a 20-50 year time horizon” … that is only long-term in common parlance, right? It’s not long-term for EAs. LT-ists would generally not prioritize this.
the reason the “longtermists working on AI risk” care about the total doom in 15 years is because it could cause extinction and preclude the possibility of a trillion happy sentient beings in the long term. Not because it will be bad for people alive today.
As a personal example, I work on AI risk and care a lot about harm to people alive today! I can’t speak for the rest of the field, but I think the argument for working on AI risk goes through if you just care about people alive today and hold beliefs which are common in the field
- see this post I wrote on the topic, and a post by Scott Alexander on the same theme.
Everyone dying in 15 years certainly sounds like it would be bad for people alive today!
But yeah, it’s more about the stakes (and duration of stakes) than the “amount of time to effect”