Hi Brian! One of the projects I’m thinking of doing is basically a series of posts explaining why I think EY is right on this issue (mildly superhuman AGI would be consequentialist in the relevant sense, could take over the world, go FOOM, etc.) Do you think this would be valuable? Would it change your priorities and decisions much if you changed your mind on this issue?
Cool. :) This topic isn’t my specialty, so I wouldn’t want you to take the time just for me, but I imagine many people might find those arguments interesting. I’d be most likely to change my mind on the consequentialist issue because I currently don’t know much about that topic (other than in the case of reinforcement-learning agents, where it seems more clear how they’re consequentialist).
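To illustrate what I mean about RL agents, here's a minimal sketch (the action names and value estimates are made up, purely for illustration): an RL agent reads as "consequentialist" in the sense that it ranks actions solely by their expected future consequences, i.e., estimated cumulative reward.

```python
from typing import Dict

def choose_action(q_values: Dict[str, float]) -> str:
    """Pick the action whose estimated future return (consequence) is highest."""
    return max(q_values, key=q_values.get)

# Hypothetical estimates of expected cumulative reward for each available action.
q_values = {"explore": 1.2, "exploit": 3.4, "wait": 0.1}

print(choose_action(q_values))  # -> "exploit", chosen only because of its expected outcome
```

Nothing about the action itself matters to the agent except the outcome it's predicted to produce, which is why the consequentialist framing seems clearer to me in that case.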
Regarding FOOM: given how much progress DeepMind, OpenAI, etc. have been making in recent years with a relatively small number of researchers (although relying on background research and computing infrastructure provided by a much larger set of people), it makes sense to me that once AGIs are able to start contributing to AGI research, things could accelerate, especially if there's enough hardware to copy the AGIs many times over. The main thing I would add is that by that point, I expect it to be pretty obvious to natsec people (and maybe the general public) that shit is about to hit the fan, so other countries/entities won't sit idly by and let one group go FOOM unopposed. Other countries could make military, even nuclear, threats if need be.
In general, I expect the future to be a bumpy ride, and AGI alignment looks very challenging. But I also expect that a nontrivial fraction of the world's elite brainpower will be focused on these issues as things get more and more serious, which may lower our expectations of how much any given person can contribute to changing how the future unfolds.