One “classic internet essay” analyzing this phenomenon is Geeks, MOPs, and sociopaths in subculture evolution. A phrase commonly used in EA is “keep EA weird”. The point is that adding too many people like Eric would dilute EA, and make the social incentive gradients point to places we don’t want them to point.
I really enjoy socializing and working with other EAs, more so than with any other community I’ve found. The career outcomes that are all the way up (and pretty far to the right) are ones where I do cool work at a longtermist office space, hanging out with the awesome people there during lunch and after work.
My understanding is that this is a common desire. I’m not sure what proportion of hardcore EAs vs. chill people would be optimal, and I could imagine it being 100% hardcore EAs.
I guess I have two reactions. First, which of the categories are you putting me in? My guess is you want to label me as a mop, but “contribute as little as they reasonably can in exchange” seems an inaccurate description of someone who’s strongly considering devoting their career to an EA cause; also, I really enjoy talking about the weird “new things” that come up (like, idk, actually trade between universes during the long reflection).
My second thought is that while your story about social gradients is a plausible one, I have a more straightforward story about whom EA should accept, which I like more. My story is: EA should accept/reward people in proportion to (or rather, in a monotone increasing fashion of) how much good they do.* For a group that tries to do the most good, this pretty straightforwardly incentivizes doing good! Sure, there are secondary cultural effects to consider—but I do think they should be thought of as secondary to doing good.
*You can also reward trying to do good to the best of each person’s ability. I think there’s a lot of merit to this approach, but it might create some not-great incentives of the form “always looking like you’re trying” (regardless of whether you’re really trying effectively).
I think an interesting related question is how much our social (and other incentive) gradients should prioritize people whose talents or dispositions are naturally predisposed to doing relevant EA work, versus people who are not naturally inclined for this but are morally compelled to “do what needs to be done.”
I think in one sense it feels more morally praiseworthy for people to be willing to do hard work. But in another sense, it’s (probably?) easier to recruit people for whom the pitch is easy and the sacrifices associated with EA work are smaller, and for a lot of current longtermist work (especially in research), having a natural inclination/aptitude/interest probably makes you a lot better at the work than grim determination does.
EA should accept/reward people in proportion to (or rather, in a monotone increasing fashion of) how much good they do.
I think this would work if one actually did it, but not if impact is distributed with long tails (e.g., a power law) and people take offense at being accepted very little.
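To make the long-tail point concrete, here is a quick illustrative simulation (the distribution, sample size, and tail exponent are all made-up assumptions, not estimates of real impact data): if “impact” is Pareto-distributed, strictly proportional acceptance gives the median person a vanishingly small share compared to the top 1%.

```python
import random

random.seed(0)

# Hypothetical illustration: sample "impact" for 1000 people from a
# heavy-tailed Pareto distribution and see what strictly proportional
# reward/acceptance would imply for a typical person.
n = 1000
alpha = 1.5  # made-up tail exponent; smaller alpha -> heavier tail
impacts = sorted((random.paretovariate(alpha) for _ in range(n)), reverse=True)

total = sum(impacts)
top_1_percent_share = sum(impacts[: n // 100]) / total  # top 10 people
median_share = impacts[n // 2] / total                  # the median person

print(f"top 1% share of total impact: {top_1_percent_share:.0%}")
print(f"median person's share:        {median_share:.5%}")
```

Under these assumptions the top handful of people capture a large fraction of total impact, so “reward in proportion to good done” means the median member gets a share close to zero, which is where the taking-offense problem bites.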
I don’t think this is an important question: it’s not like “tall people” and “short people” are distinct clusters. There is going to be a spectrum, and you would be somewhere in the middle. But labels are still a convenient shorthand.
So the thing that worries me is that if someone is optimizing for something different, they might reward other people for doing the same thing. A case that has been on my mind recently is one where someone is a respected member of the community, but what they are doing is not optimal, and it would be awkward to point that out. But it’s still necessary, even if it loses one brownie points socially.
Overall, I don’t really read minds, and I don’t know what you would or wouldn’t do.
As usual, it would be great to see downvotes accompanied by reasons for downvoting, especially in the case of NegativeNuno’s comments, since it’s an account literally created to provide frank criticism with a clear disclaimer in its bio.
Re: “having a natural inclination/aptitude/interest probably makes you a lot better at the work than grim determination”: I’m curious how true this is.