It seems like “what can we actually do to make the future better (if we have a future)?” is a question that keeps on coming up for people in the debate week.
I’ve thought about some things related to this, so it seemed worth pulling some of those threads together (with apologies for keeping it somewhat abstract). Roughly speaking, I think that:
- ~Optimal futures flow from having a good reflective process steering things
- It’s sort of a race to have a good process steering things before a bad one does
  - Averting AI takeover and averting human takeover are both ways to avoid ending up with a bad process steering (although of course it’s possible for a takeover to still lead to a good process)
  - We’re going to need higher-powered epistemic+coordination tech to build the good process
    - But note that these tools are also very useful for avoiding falling into extinction or other bad trajectories, so this activity doesn’t cleanly fall on either side of the “make the future better” vs “make there be a future” debate
- There are some other activities which might help make the future better without doing so much to increase the chance of having a future, e.g.:
  - Try to propagate “good” values (I first wrote “enlightenment” instead of “good”, since I think the truth-seeking element is especially important for ending up somewhere good; but others may differ), to make it more likely that they’re well represented in whatever entities end up steering
  - Work to anticipate and reduce the risk of worst-case futures (e.g. by cutting off the types of process that might lead there)
- However, these activities don’t (to me) seem as high-leverage for improving the future as the more mixed-purpose activities.