One of the key issues with “making the future go well” interventions is that what counts as a desirable future varies enormously between humans. The very concept of making the future go well requires buying into ethical assumptions that many people won’t share, which makes it much less useful as any sort of absolute metric to coordinate around:
A quote from Steven Byrnes illustrates the point:
When people make statements that implicitly treat “the value of the future” as being well-defined, e.g. statements like “I define ‘strong utopia’ as: at least 95% of the future’s potential value is realized”, I’m concerned that these statements are less meaningful than they sound.
This variability is much lower for interventions aimed at preventing bad outcomes, especially ensuring outcomes in which we don’t die (though there is still some variability here), because of instrumental convergence: survival is valuable on almost any value system. While there are moral views on which dying or suffering isn’t so bad, very few human beings hold them (in part due to selection effects), so there’s less chance of conflict with other agents.
The other reason is that humans mostly value the same scarce instrumental goods, but in a world where AI goes well, basically everything except status/identity becomes abundant, and this surfaces latent moral disagreements far more than our current world does.
The main reason I voted for Forethought and MATS is that I believe AI governance/safety is unusually important, with only farmed/wild animal welfare being competitive in terms of EV, and that AI has a reasonable chance of becoming so powerful that it renders the assumptions behind other cause areas irrelevant, making their impact much, much less predictable unless AI governance/safety is taken into account.