Lots of good points here—thank you.
I’m happy to discuss moral philosophy. (Genuinely: I enjoyed it at undergraduate level, and it’s one of the fun aspects of EA.) Indeed, perhaps I’ll put some direct responses to your points into another reply. But what I was trying to get at with my piece was how EA could make some rough-and-ready, plausibly justifiable shortcuts through some worrying issues that seem capable of paralysing EA decision-making.
I write as a sympathiser with EA, someone who has actually changed his actions based on the points EA makes. What I’m trying to do is show the world of EA, a world made to look foolish by the collapse of SBF, some ways to shortcut abstruse arguments that look like navel-gazing, avoid openly endorsing ‘crazy train’ ideas, resolve cluelessness in the face of difficult utilitarian calculations, and generally do much more good in the world. Your comment that “Somewhere, someone has to be doing the actual work” is precisely my point: the actual work is not worrying about mental bookkeeping or thinking about Nazis; the actual work is persuading large numbers of people and achieving real things in the real world, and I’m trying to help with that work.
As I said above, I don’t claim that any of my points are knock-down arguments for why these are ultimately the right answers. Instead, I’m trying to do something different. It seems to me that EA is (or at least should be) in the business of gaining converts and doing practical good in the world. I’m trying to describe a way forward for doing that, based on the world as it actually is. The bits where I say ‘that’s how you get popular support’ are a feature, not a bug: I’m not trying to persuade you to support EA (you’re already in the club!); I’m trying to give EA some tools to persuade other people, and some ways to avoid looking as if EA largely consists of oddballs.
Let me put it this way. I could have added: “and put on a suit and tie when you go to important meetings”. That’s the kind of advice I’m trying to give.