I’m curious how CHSP’s practical ability to address “concerns about individuals in the AI safety space” might compare to its abilities in EA spaces. In particular, it seems that the list of practical things CHSP could do about a problematic individual in the non-EA AI safety space could be significantly more limited than for someone in the EA space (e.g., banning them from CEA events).
I think as an overall gloss, it’s absolutely true that we have fewer levers in the AI Safety space. There are two reasons why I think it’s worth considering anyway:
Impact—in a basic “high importance can balance out lower tractability” way, we don’t want to only look where the streetlight is, and it’s possible that the AI Safety space will seem sufficiently high impact to us that we aim some of our energy there.
Don’t want to underestimate the levers—we have fewer explicit moves to make in the broader AI Safety space (e.g. disallowing people from events), but there is high overlap with EA, and my guess is that some people in a newer space will appreciate thoughts, advice, and shared models from people who have spent a lot of time thinking about community management.
But both of these could still be insufficient to justify putting more of our effort there; it remains to be seen.
Thanks; that makes sense. I think part of the background is the potential downside of an EA-branded organization—especially one that is externally seen (rightly or not) as the flagship EA org—going into a space with (possibly) high levels of interpersonal harm and reduced levers to address it. I don’t find the Copenhagen Interpretation of Ethics as generally convincing as many here do, yet this strikes me as a case in which EA could easily end up taking the blame for a lot of things it has little control over.
I’d update more in favor if CHSP split off from CEA and EVF, and even more in favor if the AI-safety casework operation could somehow have even greater separation from EA.