I found Nakul’s article very interesting too, but am surprised at what it led you to conclude.
I didn’t think the article was challenging the claim that doing paradigmatic EA activities was moral. I thought Nakul was suggesting that doing them wasn’t obligatory, and that the consequentialist reasons for doing them could be overridden by an individual’s projects, duties and passions. He was pushing against the idea that EA can demand that everyone support them.
It seems like your personal projects would lead you to do EA activities. So I’m surprised you judge EA activities to be less moral than alternatives. Which activities, and why?
I would have expected you to conclude something like “Doing EA activities isn’t morally required of everyone; for some people it isn’t the right thing to do; but for me it absolutely is the right thing to do”.
Agreed with the first two paragraphs.
Activities that are more moral than EA for me: at the moment I think working directly on assembling and conveying knowledge in philosophy and psychology to the AI safety community has higher expected value. I’m taking the human-compatible AI course at Berkeley with Stuart Russell, and I hang out at MIRI a lot, so in theory I’m in a good position to do that research, and some of the time I work on it. But I don’t work on it all the time; I would if I got funding for our proposal.
But actually I was referring to a counterfactual world where EA activities are less aligned with what I see as morally right than they are in this world. There’s a dimension, call it “skepticism about utilitarianism”, that reading Bernard Williams made me move along. If I moved further and further along that dimension, I’d still do EA activities; that’s all.
Your expectation is partially correct. I assign 3% to the claim that EA activities are morally required of everyone, and more than 25% to the claim that I personally am required to do them (because this is the dream time, I was lucky, I’m in a high-leverage position, etc.). But although I think it is right for me to do them, I don’t do them because it’s right, and that’s my overall point.