Is ostracization strategically workable? It seems like the safety community is much smaller than the capabilities community, and so ostracization (except of the most reckless capabilities researchers) could lead capabilities people to react in ways that, on net, turn people away from alignment work or otherwise hurt the long-term strategic picture.
As a recent counterpoint to some collaborationist messages: https://forum.effectivealtruism.org/posts/KoWW2cc6HezbeDmYE/greg_colbourn-s-shortform?commentId=Cus6idrdtH548XSKZ
“It was disappointing to see that in this recent report by CSET, the default (mainstream) assumption that continued progress in AI capabilities is important was never questioned. Indeed, AI alignment/safety/x-risk is not mentioned once, and all the policy recommendations are to do with accelerating/maintaining the growth of AI capabilities! This coming from an org that OpenPhil has given over $50M to set up.”
I’m comfortable publicly criticising big orgs (I feel that I am independent enough for this), but would be less comfortable publicly criticising individual researchers (I’d be more inclined to try to persuade them to change course toward alignment; I have recently been trying to sow some seeds in this regard with some people I’ve met who are keen on creating AGI).
yeah this is really alarming and aligns with my least charitable interpretation of my feelings / data.
it would help if i had a better picture of the size of the EA → capabilities pipeline relative to the not-EA → capabilities pipeline.
to this point, why don’t we take the opposite strategy? [even more] celebration of capabilities research and researchers. this would probably do a lot to ingratiate us.
“It seems like the safety community is much smaller than the capabilities community”
my model is that EAs are the coolest and smartest people in the world and that status among them matters to people. so this argument seems weird to me for the same reason that it would be weird if you argued that young earth creationists shouldn’t be low status in the community since there are so many of them.
i mean there seems to be a very considerable EA to capabilities pipeline, even.
i mean if i understand your argument, it can just be applied to anything. shitheads are in the global majority on like any dimension.
EAs may be the smartest people in your or my social circle, but they are likely not the smartest people in the social circles of top ML people, for better or for worse. I suspect “coolest” is less well-defined and less commonly shared as a concept, as well.
yes i don’t actually think that EAs are the highest-status group in the world. my point here is that local status among EAs does matter to people; the absolute number of “people in the world who agree with x” can be a completely misleading consideration in many cases. an implicit theory of change probably needs to be quite focused on local status.
i mean there’s a compelling argument that i’m vegan due to social pressure from the world’s smartest and coolest people. i want the smartest and coolest people in the world to like me, and being vegan sure seems to matter there. i don’t buy an argument that the smartest and coolest people in the world should do less to align status among them with animal welfare. they seem to be quite locally effective at persuading people.
like if you think about the people you personally know who seem to influence people around them (including yourself) to be much more ethical, i would be quite surprised to learn that hugbox norms got them there.