You could of course ask this question the other way round: what is the probability that things that are good for the long-run future (for P(utopia)) are also good in the short run?
For this I would put a very high probability, as:
Most of what I have read about how to affect the long-run future suggests you need feedback loops to show that things are working, which implies short-run improvements (e.g. you want your AI interpretability work to help in real-world cases today)
Many, but not all, of the examples I know of people doing things that are good for the long run are also good in the short run (e.g. value spreading, pandemic preparedness)
Some good things won't last long enough to affect the future unless they are useful (to someone) today (e.g. improved institutions)