I am wondering if you could say something about how the political developments in the US (i.e., Trump 2.0) are affecting your thinking on AGI race dynamics. The default assumption communicated publicly still seems to be that the US is “the good guys” and a “western liberal democracy” that can be counted on, when its actual actions on the world stage cast at least some doubt on this position. In some sense, one could even argue that we are already playing out a high-stakes alignment crisis at this very moment.
Any reactions or comments on this issue? I understand that openness around this topic is difficult at the moment, but I don’t think complete silence is all that wise either.
Congrats on launching the org. Would developing plans to avoid gradual disempowerment be in scope for your research?
Thanks! Yes, definitely in scope. There was a lot of discussion of this paper when it came out, and we had Raymond Douglas speak at a seminar.
Opinions vary within the team on how valuable it is to work on this; I believe Fin and Tom are pretty worried about this sort of scenario (I don’t know about others). I feel a bit less convinced of the value of working on it (relative to other things), and I’ll briefly say why:
- I feel less convinced that people wouldn’t foresee the bad gradual disempowerment scenarios and act to stop them from happening, especially with advanced AI assistance.
- In the cases that feel more likely, I feel less convinced that gradual disempowerment is particularly bad (rather than just “alien”).
- Insofar as there are bad outcomes here, it seems particularly hard to steer the course of history away from them.
The biggest upshot I see is that the more you buy these sorts of scenarios, the more it increases the value of AGI being developed by a single (e.g., multilateral) project rather than by multiple companies and countries. That’s something I’m really unsure about, so reasoning around this could easily switch my views.
Quick thoughts re: your reasons for working on it or not:
1) It seems like many people are not seeing them coming (e.g., the AI safety community seems surprisingly unreceptive, and to have made many predictable mistakes by ignoring structural causes of risk, such as being overly optimistic about companies prioritizing safety over competitiveness). Also, seeing them coming seems predictably insufficient for stopping them from happening, because they are the result of social dilemmas. And the structure of the argument appears to be the (fallacious) “if it is a real problem, other people will address it, so we don’t need to” (cf. https://www.explainxkcd.com/wiki/index.php/2278:_Scientific_Briefing).
2) Interesting. Seems potentially cruxy.
3) I guess we might agree here… Combined with (1), I take your argument to be: “won’t be neglected (1) and is not tractable (3)”, whereas I might say: “currently neglected, could require a lot of work to become tractable, but seems important enough to warrant that effort”.
The main upshots I see are:
- higher P(doom) via stories that are easier for many people to swallow → greater potential for public awareness and political will if messaging includes this.
- more attention needed to questions of social organization post-AGI.
Exciting! Am I right in understanding that Forethought Foundation for Global Priorities Research is no longer operational?
Hi Rockwell!
Yes, in most relevant senses that’s correct. We’re a new team, we think of ourselves as a new project, and Forethought Foundation’s past activities (e.g. its Fellowship programs) and public presence have been wound down. We do have continuity with Forethought Foundation in some ways, mainly legal/administrative.
Does “OpenPhil’s Worldview Investigations team” refer to Rethink Priorities’, or to another one at Open Philanthropy? Thanks!
We meant the Open Philanthropy one: apparently it’s been merged into their GCR Cause Prio research team, but it was where Joe Carlsmith, Tom Davidson, Lukas Finnveden, and others wrote a bunch of foundational reports on AI timelines, etc.
Interesting, thanks, will try to find more info!