Better Futures Discussion Thread: With Fin Moorhouse
This week, we are highlighting Forethought's Better Futures series. To make the future go better, we can either work to avoid near-term catastrophes like human extinction or improve the futures where we survive. This series from Forethought explores that second option.
Fin Moorhouse (@finm), who authored two chapters in the series (Convergence and Compromise, and No Easy Eutopia) along with @William_MacAskill, has agreed to answer a few of your questions.
You can read (and comment on) the full series on the Forum. In order, the chapters are:
Leave your questions and comments below. Note that Fin isn't committing to answer every question, and if you see someone else's question you can answer, you're free to.
Given this combination of views, I'm surprised that Will doesn't support what @Holly Elmore ⏸️ 🔸 calls "Pause NOW" and instead wants to see a pause later (after we have human-level AI). I'm curious if your own views are similar or how they differ from Will's. (My own "expected value of the future, given survival" is, I would say, similarly pessimistic, but I'm reluctant to put it into numbers because I'm very unsure how to quantify it.)
Aside from what Holly said in the linked comment, which I agree with, another argument more relevant to the current discussion is that many opportunities for making the future better seem to exist during the AI transition, including the early parts of it, so by not pausing ASAP (and currently having few resources for such interventions), we're permanently giving up these opportunities. Conversely, by pausing NOW, we buy more time to think and strategize about how to better intervene on these opportunities, or otherwise lay the groundwork for them.
For example, during the pause, we could:
Try to solve metaphilosophy, or otherwise think about how to improve AI philosophical competence or moral epistemology.
Try to get AI companies to "think about this issue" (of morally uncertain AIs that are motivated by doing good de dicto).
Research ways to make such AIs safer from our (human) perspective so that thereâs less of a tradeoff between safety and Better Futures.
Spread the idea of Better Futures generally so that when AI development resumes, there will be more people aware of and working on these issues.
Such interventions could mean the difference between the first human-level AIs being competent and critical moral/philosophical advisors, or independent moral (and safe) agents, versus uncritically doing what humans seem to want and/or giving bad/incompetent/sycophantic "advice" (when humans think to ask for it), which could plausibly make a big difference to how well the future goes.
What do you think about this argument, and overall about pause now vs later?
Thanks for this.
In each of the examples you give, I'm thinking that the pause would be significantly more beneficial (plausibly by 10x) if we pause when AI is already capable enough to significantly help us solve the issue. In general, these seem like the kinds of issues where AI could massively accelerate progress.
So if I'm choosing between an international pause now vs an international pause in 2 years, I choose the latter. (I assume we're talking about international pauses here rather than just the U.S., but let me know if you also support a unilateral pause now!)
I do find compelling Holly's point that it might be damaging to quibble about exactly when we pause, if that reduces the chance of a pause happening at all. And today we are very far from a pause actually happening, and one may well be needed in two years' time, so I definitely support efforts to get us closer to a pause!
I'm hesitant about saying "pause now" because I actually think a different policy might be much more effective. But I think a world where we were about to do an international pause would be better than the actual world.
(I want to think more about this topic, and all of this is very tentative.)
Hi Fin,
I have a lot of questions, so I figured I would just share all of them and you could respond to the ones you want to.
I think Forethought is a super cool institution. What advice would you have for someone who wanted to work there as a researcher? Do you think it's important to have a strong understanding of how LLMs work?
I made this post where I categorized flourishing cause areas based on "How To Make The Future Better." I thought I'd share. I'm curious if this categorization generally aligns with how you think about the problem.
Locking in one's values
Ensuring the future is aligned with the correct values
Working towards viatopia
Promoting futures with more moral reflection
Improving the ability for people with different views to get their desired futures
Ensuring future people are able to create a good future
Keeping humanityâs options open
Improving global stability
Improving future humans' decision-making
Empowering responsible actors
Speeding up progress
I made this post which is an overview of longtermism's ideas, writings, individuals, institutions, and history. I thought I'd share since you made the longtermism website.
The Better Futures series assumes that the future will be net-positive by default. To me, the ideas presented in the series (strong self-modification, modification of descendants, selection of beliefs by evolutionary pressures) indicate that we should expect future humans to be very different from us, and that, as a result, we should expect the future to be neutral in expectation. Do you agree with this logic or do you think the future will be net-positive by default? Additionally, why?
Currently, there is a wide range of ideas about how a post-AGI future will go and what features it will contain. To me, this strongly indicates that the post-AGI future could go in a very broad range of ways and that we should prepare for the many different ways it could go. At the same time, I get the sense that Forethought has a very specific vision about how a post-AGI future will go (there will be an intelligence explosion, tools for epistemics will be beneficial, we might begin acquiring resources in other solar systems, small sets of actors could use AGI in malicious ways). I'm wondering how you decide which ideas you think are likely, and whether you have any measures in place to ensure you're receiving criticism of your ideas, so you don't create an epistemic bubble?
I understand that you have done some work related to space governance. A criticism I have of working in this field is that (1) it seems very intractable, given the lack of space treaties; (2) if any great power has a decisive advantage, global treaties won't matter; (3) even if you are able to get a law or treaty passed, corporate or state interests could easily override it later on; and (4) there's probably a low chance of even getting into a position where you could influence this stuff. As such, I'm wondering: if you think it's valuable for additional people to work in the field, why?
It seems like longtermism is an unhelpful idea since it requires people to believe that our actions could persist for millions of years. I personally am pretty skeptical of this, although I do think it is possible. It also seems like the idea has been somewhat harmful to EA as a movement since people can always point out that some of the founders of the movement are focused on helping people millions of years from now, which sounds pretty crazy. I'm wondering if you agree with this assessment.
In âHow To Make The Future Better,â MacAskill argues that we should make AIs encourage humans to be good people and use them as a source of moral reflection. This seems like it could be deeply problematic in case moral sense theory is true, but AIs lack a moral sense. Do you agree with this?
Thanks James!
Some things I appreciate in my colleagues: having some discernment for which questions or ideas are most important, rather than merely conceptually interesting but not urgent; being able to contribute to group conversations by driving at cruxes, being willing to ask naive questions, avoiding the impulse to sound clever for its own sake, and being able to spot and entertain "big if true" hypotheses; and being able to clearly communicate ideas where you often don't have an especially deep literature to draw on.
I think it's important to understand the fundamentals of how AI works, including some of the theory. I don't think it's important to have deep technical knowledge of LLMs, unless you can see why those details could end up being relevant for macrostrategy.
On your second question, many of those points seem good to me. I'll single out "Locking in one's values" since I've been thinking about it recently. It seems to me that some people roughly think that great futures are futures which resemble our own (or which carry on our values) in many particular ways. In particular, maybe great futures are futures which are recognisably human in their values. Inhuman futures, like futures where AI successors call the shots, might just seem empty of what we today care about, even if they involve a lot of moral reflection and nothing morally offensive from a human perspective. We could call this a "humanity forever" view.
On the other hand, some people roughly think that great futures are necessarily futures which are radically different from humanity today, including in the values which guide them, and perhaps the kind of actors living there. See Dan Faggella on the "Worthy Successor" idea (and here), which I see as one version of this view.
Both these views care about preventing obvious catastrophes from AGI, but it seems to me like they might end up disagreeing quite profoundly on what should come next. It's possible that there is opportunity for trade and compromise between the two views, but in any case this strikes me as a potentially important difference in approach to post-AGI futures.
Firstly, you're right that the series doesn't discuss negative futures, but I should say that's not because Will or I think they are worth ignoring, or very unlikely in absolute terms. We didn't discuss them more just so we could make a more focused argument about how to think about making good futures even better.
I think your point (quoted) touches on the difference I mentioned above between "humanity forever" views and views which are more open to change in values. I think it's coherent to take a view such as:
You want to value whatever is ultimately valuable. You're unsure what that is, but you trust the processes which guide the future to converge on it;
You want to value whatever you would value under some idealised process of reflection, and you think the processes which guide the future will emulate idealised reflection on your own values closely enough;
You value roughly what you currently value, but you're scope-insensitive: in order to think we've reached a great future, you just need your own neck of the woods to be how you want it, and the rest of the future to avoid things you find morally repugnant. You expect almost the entire future not to be guided by what you value, but you're confident you can get the things you want and be satisfied the future is great, and confident the rest of the future will avoid the morally repugnant (perhaps through trade);
Similar to above, but what you personally value is cheap by the lights of other value systems which guide the future, and vice-versa. So you are confident you can secure a great future by your lights through trade.
Better Futures argues that these views may be less tenable than they first appear, but I think they're not totally doomed.
Additionally, I would point out a potential "missing mood" in the framing we adopt of cardinally quantifying the value of the future as a fraction of the value of the best feasible future. This suggests futures which are only, say, 10^-5 of the value of the best feasible future are barren, hollow, "neutral". But this would be a mistake: potentially our own world, even with all the harm and pain removed, is achieving a tiny fraction of what a great future could achieve. So we might imagine (as Better Futures points out) a "common-sense eutopia" which is radically better than the world today, but still only a fraction as good as things could get. That could be true, but it doesn't undermine the value of such a future, which would also truly be (by stipulation) wildly better than the world today! All the joy and freedom and discovery and so on, in this near-zero world, would be entirely real and could dwarf all the good we have achieved and enjoyed so far.
Maybe I'm misreading, but I don't think it follows from uncertainty about how things go that many different things will actually happen. For example, if you're uncertain who wins a political election, you don't infer that everyone wins and shares power.
I'm in a few minds about this, so I'll just list some reactions:
You say the Forethought vision is "very specific", and then you list some claims (e.g. "small sets of actors could use AGI in malicious ways") which seem… surprisingly anodyne? In particular, it doesn't strike me as egregious or unusual to put a decent amount of credence in those claims being true. I think that's all you need to take them seriously and work on them. Indeed, I don't myself feel extremely confident in any one of them.
I think there is a way to do criticism in a performative way, where you invite people you know to disagree, for reasons you are already familiar with. I don't think that is totally useless, because performing these dialogues in public can be useful for other people to decide what they think.
On the other hand, I think the best kind of outside criticism for the sake of throwing out bad ideas often isn't very flashy, and can look like outside experts telling you "this isn't really how [my domain of expertise] works, so [ABC] seems confused but [XYZ] seems plausible".
From my perspective there is quite a lot of internal disagreement, including between broad worldviews, although that's relative.
Speaking personally, I worry a bit that there are components of the implicit shared Forethought worldview which are tricky to pin down from the outside, and thus more likely to influence research decisions in an unscrutinised way. I do think this is a generic problem, and think the most useful place from which to notice and communicate these implicit beliefs is straddling being enough of an insider to have context, and enough of an outsider to see alternatives.
On the other hand, I think you do at some point just need to pick some assumptions and some worldview and work within it to make any progress at all. In my experience, simply pointing out that those assumptions could be wrong is often less valuable than proposing more fleshed-out alternative assumptions and worldviews, which themselves can be criticised, and so on.
We are, at Forethought, running a research programme on space right now, which I guess reflects a view that it does seem worth investigating more. I don't think the central case for space runs through the hope for binding international treaties, because I agree that we shouldn't expect them to hold. I think there are a few other reasons to want to investigate space. One is that the space economy could be somewhat relevant for the course of AGI development, for example if orbital data centres are a big deal, or because of the role of sensing satellites in peace and security.
Another is that most of the physical stuff is in space. At some point it seems likely to me, if the human project continues at all, that most of the important stuff will also eventually be in space. AGI + automated manufacturing + rapid R&D progress suggests that expanding into space could happen on the time span of decades rather than centuries or millennia, and that seems generically worth planning for. And it seems like there are some policy levers which don't route through international treaties.
To be clear, I don't currently think that space governance should be the next big cause in EA or anything like that.
This feels like a slightly odd sentence construction, because you seem to be saying that longtermism is unhelpful because it requires people to believe one of its central claims. I agree it's contentious, and I'm certainly not confident that the effects of our actions could persist for millions of years, but it seems plausible enough that the anticipated long-term effects of our actions should meaningfully weigh into what we prioritise, at least where you can tell a story about how your decisions could have some systematic long-run effects.
I do think that is plausible. Although, to state the obvious, there is a difference between which ideas have good or bad PR effects when you say them out loud, and which ideas are actually true or important. So questions about communicating longtermist ideas are, naturally, different from the question of whether longtermist ideas are worth taking seriously as ideas.
And then, I also want to say: the full-on version of longtermism (that the very long-run effects of our actions are overwhelmingly important for what we prioritise) just doesn't feel especially necessary for working on most or even all of the topics that Forethought is focused on. There is a far more common-sense and mundane reason to focus on them, which is that they could matter enormously within our own lifetimes! Another way of putting that is that when trying to prioritise between possible focuses within Forethought, my personal view is that longtermism is rarely a crux. Maybe my colleagues disagree with that; obviously I'm not speaking on their behalf.
I'm not sure I'm entirely following your points, but I don't see a strong reason why AIs or non-human entities could not in principle engage in genuine moral reasoning in the same way that humans do. Maybe instead the AIs will do something which superficially resembles real moral reasoning, but which is closer to just telling humans what they want to hear.
I do think that is not a crazy thing to worry about, because it is much easier to train a skill where an uncontroversial and abundant source of ground-truth data exists. Moral reasoning is not one of those domains, because people often don't agree on what good moral reasoning looks like. So I think there is much work to be done on that front, although I'm not sure that answers your question.
Thanks again for your questions!
Hey Fin,
Thanks for so thoughtfully answering my questions!
Hi Fin, sorry I'm a bit late with my question, I was rereading parts of the Better Futures series. First of all, I have to say it's one of my favorite article series I've ever read, and I'll be citing it in my own work going forward. The easygoing-versus-fussy distinction in particular is something I'm finding really interesting to dig into. :) Would love to discuss it in more detail at some point.
I wanted to push on the metaphor of sailing to an island, which appears at the start of No Easy Eutopia, but my question is going to take some preamble (sorry!).
I find myself preferring a slightly different picture. Rather than thinking of eutopia as an island we're navigating to, I tend to think of society as the ship itself, drifting through a sea of value over time (a topography of better and worse regions we're already moving through). Societal change feels to me more like a search through uncharted moral territories than an expedition to a specific destination. On that picture, the priority seems more likely to be "how do we improve the ship, so that society reliably moves toward better regions of the sea?"
A couple of clarifications. First, I grant fussiness: I agree most plausible axiologies locate near-best futures in a very narrow region (I lean towards total hedonistic utilitarianism, myself). Second, I'm not a quietist; in my own work I'm defending what I call moral niche construction, a fairly interventionist view on which we should actively reshape institutions, technologies, and even our own moral psychology (through things like AI moral decisionmakers or bioenhancement) to push society toward better regions. So the disagreement isn't really about ambition, either.
Where I want to press is the following. In the ship-improvement picture, I can grant openly that we will probably never reach eutopia. We end up in a high-value region of the sea (a local optimum), much better than where we are now, plausibly very good in absolute terms, but not the narrow island.
That sounds like a concession, but on rereading Convergence and Compromise, it looks to me like the target-pursuit picture probably doesn't reach the island either: you mention how WAM-convergence is unlikely, partial convergence plus trade faces serious obstacles, value-destroying threats can eat most of the value… So the comparison isn't "guaranteed eutopia versus probably-not-eutopia", since you yourself seem pretty pessimistic. Rather, it's two orientations that both probably miss the island, where one delivers reliable improvements to our current region of the sea along the way, and the other keeps optimizing toward a target it probably won't hit. And, well, if you miss the moon, you don't really land among the stars… you drift in empty space and die, haha.
(There are similar points in Jerry Gaus's The Tyranny of the Ideal, and in recent debates between ideal theory and non-ideal theory in moral and political philosophy.)
So, finally, my question is: given that target-pursuit probably doesn't reach eutopia either, on the series' own analysis, why is the practical orientation toward the narrow target rather than toward improving our current region of the sea (e.g. pursuing very high, plausibly easy-to-reach, and resilient local optima)? What's the case for target-pursuit as a practical orientation, once we factor in that we will probably fail? Is it a case akin to fanaticism, where, if we land on the island, the payoff would be huge?
(Apologies in advance if this is addressed somewhere in the series, my memory context window isn't large enough to hold the whole essay series at once!)
Forethought's view that improving the future conditional on survival is more important than ensuring survival goes against the dominant view in EA for many years that we need to reduce extinction risk. Two questions on this:
How far away from the optimal allocation of (longtermist) resources do you think the community currently is?
For example, should we be radically reducing investment in things like addressing biorisk or nuclear risk? Do we need to be rethinking the allocation of resources within AI risk?
Do you think there is anything that is being prioritized in the community that is actually harmful?
For example, could certain AI alignment approaches be bad for future digital sentience?
I really liked the series :)