Yepp, I agree that I am doing an intuition pump to convey my point. I think this is a reasonable approach to take because I actually think there’s much more disagreement on vibes and culture than there is on substance (I too would like AI development to go more slowly). E.g. AI safety researchers paying for ChatGPT obviously brings in a negligible amount of money for OpenAI, and so when people think about that stuff the actual cognitive process is more like “what will my purchase signal and how will it influence norms?” But that’s precisely the sort of thing that has an effect on AI safety culture independent of whether people agree or disagree on specific policies—can you imagine hacker culture developing amongst people who were boycotting computers? Hence my takeaway at the end of the post is not “stop advocating for pauses” but rather “please consider how to have positive effects on community culture and epistemics, which might not happen by default”.
I would be keen to hear more fleshed-out versions of the passages with the valences swapped! I like the one you’ve done, although I’d note that you’re focusing on the outcomes achieved by those groups, whereas I’m focusing also on the psychologies of the people in those groups. I think the psychological part is important because, as they say, culture eats strategy for breakfast. I do think that climate activists have done a good job of getting funding into renewables; but I think alignment research is much harder to accelerate (e.g. because the metrics are much less clear, funding is less of a bottleneck, and the target is moving much faster), and so trading off a culture focused on understanding the situation clearly for more success at activism may not be the right call here even if it was there.