In the episode you say:

"And so I do want to make it clear that insofar as I've expressed, let's say, some degree of ambivalence about how much we ought to be prioritising AI safety and AI governance today, my implicit reference point here is things like pandemic preparedness, nuclear war, or climate change: just the best bets that we have for having a long-run social impact."
I was wondering what you think of the potential of broader attempts to influence the long-run future (e.g. promoting positive values, growing the EA movement) as opposed to the more targeted attempts to reduce x-risks that are most prominent in the EA movement.
In brief, I feel positively about these broader attempts!
It seems like some of these broad efforts could be useful, instrumentally, for reducing a number of different risks (by building up the pool of available talent, building connections, etc.). Moreover, the more unsure we are about which risks matter most, the more valuable broad capacity-building efforts become.
It’s also possible that some shifts in values, institutions, or ideas could actually be long-lasting. (This is something that Will MacAskill, for example, is currently interested in.) If this is right, then I think it’s at least conceivable that trying to positively influence future values/institutions/ideas is more important than reducing the risk of global catastrophes: the goodness of different possible futures might vary greatly.
Thanks for your reply! I also feel positively about these broader attempts, and am glad that they are being taken more seriously by prominent EA thinkers.