Suppose that we did a sortition with 100 English-speaking people (uniformly selected, for simplicity, over people who speak English and are literate). We task this sortition with determining what tradeoff to make between the risk of (violent) disempowerment and accelerating AI, and also with figuring out whether globally accelerating AI is good. Suppose this sortition operates for several months and talks to many relevant experts (and reads applicable books, etc.). What conclusion do you think this sortition would come to?
My intuitive response is to reject the premise that such a process would accurately tell you much about people’s preferences. Evaluating large-scale policy tradeoffs typically requires people to engage with highly complex epistemic questions and tricky normative issues. The way people think about epistemic and impersonal normative issues generally differs strongly from how they think about their personal preferences about their own lives. As a result, I expect that this sortition exercise would primarily address a different question than the one I’m most interested in.
Furthermore, several months of study is not nearly enough time for most people to become adequately informed about issues of this complexity. There's a reason we trust people with relevant PhDs to design, say, vaccine policy, rather than handing the wheel to people who have spent only a few months reading about vaccines online.
Putting this critique of the thought experiment aside for the moment, my best guess is that the sortition group would conclude that AI development should continue roughly at its current rate, though probably slightly slower and with additional regulations, especially to address conventional concerns like job loss, harm to children, and similar issues. A significant minority would likely strongly advocate that we need to ensure we stay ahead of China.
My prediction here draws mainly on the fact that this is currently the stance favored by most policy-makers, academics, and other experts who have examined the topic. I'd expect a randomly selected group of citizens to largely defer to expert opinion rather than take an entirely different position. I do not expect this group to reach qualitatively the same conclusions as mainstream EAs or rationalists, since that community makes up only a small share of the people who have thought about AI.
I doubt the outcome of such an exercise would meaningfully change my mind on this issue, even if the group concluded that we should pause AI, though it would depend on the details of how the exercise was performed.
The current results show that I'm the most favorable toward accelerating AI of everyone who has voted so far. I voted for "no regulations, no subsidy" and "Ok to be a capabilities employee at a less safe lab".
However, I should clarify that I only support laissez-faire policy for AI development as a temporary state of affairs, not as a permanent policy recommendation. This is because the overall impact and risks of existing AI systems are comparable to, or less than, those of technologies like smartphones, which I also think should remain basically unregulated. But I expect future AI capabilities to be greater.
Once AI agents become significantly more capable, my favored proposals for managing AI risks are to implement liability regimes (perhaps modeled on Gabriel Weil's proposals) and to grant AIs economic rights (such as the right to own property, enter contracts, and make tort claims). Beyond these proposals, I don't see any obvious policies I'd support that would slow down AI development, and in practice I'm already worried that these policies would go too far in constraining AI's potential.