When I wrote this post, PauseAI and similar movements were much less of a thing.
I agree, this seems broadly accurate. I suppose I should have clarified that your post was perhaps true at the time, but in my view, has since become false if one counts AI pause as a “core action relevant point” of EA.
I just think that if you told people "there's this new technology that could cause human extinction, or be a really big deal and save many lives and cause an age of wonders; should we be slow and cautious in how we develop it?", most people would say yes?
I believe that people’s answers to questions like this are usually highly sensitive to how the issue is framed. If you simply presented them with the exact quote you wrote here, without explaining that “saving many lives” would likely include the lives of their loved ones, such as their elderly relatives, I agree that most would support slowing down development. However, if you instead clarified that continuing development would likely save their own lives and the lives of their family members by curing most types of diseases, and if you also emphasized that the risk of human extinction from continued development is very low (for example, 1-2%), then I think there would be a significantly higher chance that most people would support moving forward with the technology at a reasonably fast pace, though presumably with some form of regulation in place to govern the technology.
One possible response to my argument is to point to survey data that shows most people favor pausing AI. However, while I agree survey data can be useful, I don’t think it provides strong evidence in this case for the claim. This is because most people, when answering survey questions, lack sufficient context and have not spent much time thinking deeply about these complex issues. Their responses are often made without fully understanding the stakes or the relevant information. In contrast, if you look at the behavior of current legislators and government officials who are being advised by scientific experts and given roughly this same information, it does not seem that they are currently strongly in favor of pausing AI development.
I agree that people’s takes in response to surveys are very sensitive to framing and hard to interpret. I was trying to gesture at the hypothesis that many people are skeptical of future technologies, afraid of job loss, don’t trust tech, etc, even if they do sincerely value loved ones. But anyway, that’s not a crux.
I think we basically agree here, overall? I agree that my arguments here are not sufficient to support a large pause for a small reduction in risk. I don't consider this a core point of EA, but I'm not confident in that, and don't think you're too unreasonable for considering it one.
Though while I'm skeptical of the type of unilateral pause pushed for in EA, I am much more supportive of not actively pushing capabilities to be faster, since I think the arguments that pauses are distortionary and penalise safety-motivated actors don't apply there, and most acceleration will diffuse across the ecosystem. This makes me guess that Mechanize is net negative, so I imagine this is also a point of disagreement between us.