While I didn’t elaborate on my thoughts in the OP, essentially I was aiming to say “if you’d like to play a role in advocating for AI safety, the first steps are to gain skills so you can persuade the right people effectively.” I think some people jump from “become convinced that AI is an issue” to “immediately start arguing with people on the internet”.
If you want to do that, I’d say it’s important to:
a) gain a firm understanding of AI and AI safety,
b) gain an understanding of common objections and the modes of thought surrounding those objections, and
c) practice engaging with people in a way that actually has a positive impact (do this practice on lower-stakes issues, not AI). My experience is that positive interactions involve a lot of work and emotional labor.
(I still argue occasionally about AI on the internet and I think I’ve regretted it basically every time)
I think it makes more sense to aim for high-impact influence, where you cultivate valuable skills that get you hired at actual AI research firms, where you can then shape the culture in a way that prioritizes safety.
I think you’re mostly right, but there is a difference between arguing in order to convince the other person (what you seem to be focused on) and arguing to convince third-party observers and signal the strength of your own position (what I had in mind). The latter seems to be less knowledge-intensive.