You know in some sense I see EA as a support group for crazies. Normie reality involves accepting a lot of things as OK that are not OK. If you care a lot in any visceral sense about x risk, or animal welfare, then you are in for a lot of psychic difficulty coping with the world around you. Hell, even just caring about the shit that isn’t remotely weird, like effective poverty interventions, is enough to cause psychic damage trying to cope with the way that your entire environment claims to care about helping people and behaviorally just doesn’t.
So when I see the same patterns and norms applied to capabilities research that, outside of EA, get applied to everything ("oh, you work in gain of function? that sounds neat"), it gives me the jeebs.
This doesn’t invalidate the kind of math @richard_ngo is doing, a la “well, if we get 1 safety researcher for every 5 capabilities researchers we tolerate/enable, that seems worth it”. But I would like less jeebs.