This is a great and interesting post! Thanks for sharing. I thought Scott’s arguments were really convincing, but you updated me away from them.
Some small notes:
Under ‘Convincing people doesn’t seem that hard’:
“I don’t remember ever having any trouble discussing AI risk with random strangers.”
We have wildly different experiences! Every time I try to explain it to friends or family, they think I’m crazy. They don’t believe it at all. But perhaps I’m just really bad at explaining it. This is why I’m pretty pessimistic that it’s easy to convince people. I still don’t want to give up on it, though.
“I arrogantly think I could write a broadly compelling and accessible case for AI risk”
Please do this! I would love to see it. We need more easily accessible introductions to AI risk. If it could help me get better at explaining the issue, that would be amazing.
A question that’s perhaps a little less relevant: I think Scott once used a metaphor saying AI safety folks shouldn’t be like “climate activists” fighting against “fossil fuel companies” (AI capabilities folks). If coordination is possible, what would be a good metaphor? Are there other industries where capabilities and safety people work together?