I very much agree with this and have been struggling with a similar problem in terms of achieving high-value futures versus mediocre ones.
I think there may be some sort of "Fragile Future Value Hypothesis," somewhat related to Will MacAskill's "No Easy Eutopia" (and the essay that follows it in the series) and somewhat isomorphic to "The Vulnerable World Hypothesis." The idea is that there may be many path dependencies, potentially leading to many low- and medium-value attractor states we could end up in, because, in expectation, we are somewhat clueless about which crucial considerations matter, and if we act wrongly on any of them, we could lose most or even nearly all future value.
I also agree that making the decision-makers working on AI highly aware of this could be an important part of the solution. My sense is that the problem isn't so much that people at the labs don't care about future value (they are often quite explicitly utopian); rather, they don't seem to have much awareness that near-best futures might be highly contingent and very difficult to achieve, and the illegibility of this fact means they aren't really trying to be careful about which path they set us on.
I also agree that getting advanced AI working on these types of issues as soon as it is able to meaningfully assist could be an important part of the solution, and I intend to make this one of my main objectives. That said, I've been a bit more focused on macrostrategy than philosophy, because I think macrostrategy may be more feasible for current or near-future AI, and if we get into the right strategic position, that could in turn position us to figure out the philosophy, which I expect to be a lot harder for AI.