Thanks! Yes, definitely in scope. There was a lot of discussion of this paper when it came out, and we had Raymond Douglas speak at a seminar.
Opinions vary within the team on how valuable it is to work on this; I believe Fin and Tom are pretty worried about this sort of scenario (I don’t know about others). I feel less convinced of the value of working on it (relative to other things), and I’ll briefly say why:
- I feel less convinced that people wouldn’t foresee the bad gradual disempowerment scenarios and act to stop them from happening, especially with advanced AI assistance.
- In the cases that feel more likely, I feel less convinced that gradual disempowerment is particularly bad (rather than just “alien”).
- Insofar as there are bad outcomes here, it seems particularly hard to steer the course of history away from them.
The biggest upshot I see is that, the more you buy these sorts of scenarios, the more it increases the value of AGI being developed by a single (e.g. multilateral) project rather than by multiple companies and countries. That’s something I’m really unsure about, so reasoning around this could easily switch my views.