Thanks; that makes sense. I think part of the background is the potential downside of an EA-branded organization—especially one that is externally seen (rightly or not) as the flagship EA org—going into a space with (possibly) high levels of interpersonal harm and reduced levers to address it. I don’t find the Copenhagen Interpretation of Ethics as generally convincing as many here do. Yet this strikes me as a case in which EA could easily end up taking the blame for a lot of things it has little control over.
I’d update more in favor if CHSP were split off from CEA and EVF, and even more so if the AI-safety casework operation could somehow have even greater separation from EA.