This was really well written! I appreciate the concise, to-the-point writing style, as well as the summary at the top.
Regarding the arguments, they make sense to me, although this is where the whole discussion of longtermism tends to stay pretty abstract, since we can’t actually put real numbers on anything.
For example, in the spirit of yours: does working on AI safety at MIRI prevent extinction and ensure a sufficiently great future, compared to, say, working on AI capabilities at OpenAI? (That is, could a misaligned AI bring about an even greater future?)
I don’t think it’s actually possible to do a real calculation here, so we make the (reasonable) baseline assumption that a future with aligned AI is better than one with misaligned AI, and go from there.
Maybe I’m overly biased against longtermism anyway, but in this example the problem you mention doesn’t seem like a real-world worry, just a theoretically possible Pascal’s mugging.
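(For concreteness, the structure of a Pascal’s mugging is just an expected-value product with a tiny probability and an astronomical payoff. With purely made-up illustrative numbers: even a 10^-15 chance of a future worth 10^40 lives has an expected value of 10^25 lives, which swamps any ordinary intervention.)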
Having said that, I still think it’s a good argument against strong longtermism.