I glanced at GCRI’s research you linked. I think AI is a big deal in expectation, but I’m prima facie skeptical about the value of “AI ethics.” My baseline imagination is that we get capabilities first, then figure out what to do with AI. I’m substantially more optimistic about our ability to make good decisions after we have strong AI, and I think the moral importance of the time after we get strong AI dominates the time before (in expectation). Of course, GCRI isn’t the only institution to do AI ethics work, so I might be missing something — what’s the basic case for doing AI ethics now? (Feel free to refer me to something already written rather than writing a reply yourself; there may be good existing writeups.)
Thanks for the question. This is a good thing to think critically about. With respect to strong AI, the short answer is that it’s important to develop these sorts of ideas in advance. If we wait until we already have the technology, it could be too late. There are some scenarios in which waiting is more viable, such as the idea of a long reflection, but these make up only a portion of the total scenario space, and even then, the outcomes could depend on the initial setup. Additionally, ethics matters for near-term / weak AI, including in ways that affect global catastrophic risk, such as in the context of environmental or military affairs.