I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but “contribute as little as they reasonably can in exchange” seems an inaccurate description of someone who’s strongly considering devoting their career to an EA cause; also, I really enjoy talking about the weird “new things” that come up (like, idk, actually trading between universes during the long reflection).
My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story about whom EA should accept, which I like more. My story is: EA should accept/reward people in proportion to (or rather, as a monotone increasing function of) how much good they do.* For a group that tries to do the most good, this pretty straightforwardly incentivizes doing good! Sure, there are secondary cultural effects to consider—but I do think they should be thought of as secondary to doing good.
*You could also reward trying to do good to the best of each person’s ability. I think there’s a lot of merit to this approach, but it might create some not-great incentives of the form “always looking like you’re trying” (regardless of whether you’re actually trying effectively).
I think an interesting related question is how much our social (and other incentive) gradients should prioritize people whose talents or dispositions naturally suit them to relevant EA work, versus people who are not naturally inclined toward it but feel morally compelled to “do what needs to be done.”
I think in one sense it feels more morally praiseworthy for people to be willing to do hard work. But in another sense, it’s (probably?) easier to recruit people for whom the pitch and the associated sacrifices of doing EA work are smaller; and for a lot of current longtermist work (especially in research), having a natural inclination/aptitude/interest probably makes you a lot better at the work than grim determination does.
“EA should accept/reward people in proportion to (or rather, as a monotone increasing function of) how much good they do.”
I think this would work if one actually did it, but not if impact has a long-tailed distribution (e.g., a power law) and people take offense at being accepted very little.
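To make the long-tail point concrete, here’s a minimal sketch (a hypothetical Pareto draw standing in for “long tails”; all numbers are illustrative assumptions, not data about any actual community): if acceptance is handed out strictly in proportion to impact, the median member’s share ends up tiny.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical impact scores for 1,000 members, drawn from a heavy-tailed
# Pareto distribution (shape alpha = 1.1) as a stand-in for "long tails".
# Everything here is an illustrative assumption, not measured data.
impact = rng.pareto(1.1, size=1000) + 1  # shift so every impact is >= 1

# "Accept/reward in proportion to good done": each person's share of the total.
shares = impact / impact.sum()
shares.sort()  # ascending

print(f"Top 10 people's share of all acceptance: {shares[-10:].sum():.0%}")
print(f"Median member's share:                   {np.median(shares):.4%}")
```

With a tail this heavy, a handful of people absorb most of the acceptance and nearly everyone else gets approximately none, which is exactly where the “taking offense” problem bites.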
I don’t think this is an important question; it’s not like “tall people” and “short people” are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But using labels is still a convenient shorthand.
So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. A case that has been on my mind recently: someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out. It still needs pointing out, though, even if doing so costs one some brownie points socially.
Overall, I don’t really read minds, and I don’t know what you would or wouldn’t do.
On the claim that natural inclination/aptitude makes you a lot better at the work than grim determination: I’m curious how true this is.