I don't know, but my best guess is that "janitor at MIRI"-type examples reinforce a certain vibe people don't like: the notion that even "lower-status" jobs at certain orgs are in some way elevated compared to other jobs, and the implication (however unintended) that someone should be happy to drop a more fulfilling or interesting job outside of EA to become MIRI's janitor (if they'd be good at it).
I think your example would hold for someone donating a few hundred dollars to MIRI (which buys roughly 10^-4 additional researchers), without triggering the same ideas. Same goes for "contributing three useful LessWrong comments on posts about AI", "giving Superintelligence to one friend", etc. These examples are nice in that they also work for people who don't want to live in the Bay, are happy in their current jobs, etc.
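(For concreteness, a minimal sketch of the arithmetic behind that figure, assuming a fully loaded cost on the order of $3M per marginal researcher; that cost is my assumption, not a number from this thread:

$$\frac{\$300\ \text{donated}}{\$3{,}000{,}000\ \text{per marginal researcher}} = 10^{-4}\ \text{researchers}$$

)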
Anyway, that's just a guess, which doubles as a critique of the shortform post. But I did upvote the post, because I liked this bit:
But the "correct" framing (I claim) would look at the absolute scale, and consider stuff like: (a) we are among the first 100 billion or so people, and we hope there will one day be quadrillions; (b) (most) EAs are unusually well-placed within this already very privileged set; and (c) within that even smaller subset again, we try unusually hard to have a long-term impact, so that also counts for something.
I agree that the vibe you're describing tends to be a bit cultish, precisely because people take it too far. That said, it seems right that low-prestige jobs within crucially needed teams can be more impactful than high-prestige jobs further away from the action. (I'm making a general point; I'm not saying that MIRI is necessarily a great example of "where things matter", nor am I saying the opposite.) In particular, "personal assistant" strikes me as an example of a highly impactful role (because it requires a hard-to-replace skillset).
(Edit: I don't expect you to necessarily disagree with any of that, since you were just giving a plausible explanation for why the comment above may have turned off some people.)
I agree with this, and I did try to emphasize that I was only using MIRI as an example. Do you think the post would be better if I replaced MIRI with a hypothetical example? The problem with that is that the differences would then feel less visceral.
FWIW I'm also skeptical of naive ex ante differences of >~2 orders of magnitude between causes, after accounting for meta-EA effects. That said, I also think maybe our culture will be better if we celebrate doing naively good things over doing things that are externally high status.*
But I don't feel too strongly; the main point of the shortform was just that I talk to some people who are disillusioned because they feel like EA tells them that their jobs are less important than other jobs, and I'm just like: whoa, that's such a weird impression on an absolute scale (like knowing that you won a million dollars in a lottery but being sad that your friend won a billion). I'll think about how to reframe the post so it's less likely to invite such relative comparisons, though I also think denying the importance of the relative comparisons is the point.
*I also somewhat buy arguments from you, Holden Karnofsky, and others that, for building skill, career capital, etc., it's more important to try to do really hard things even if they're naively useless. The phrase "mixed strategy" comes to mind.
That said, I also think maybe our culture will be better if we celebrate doing naively good things over doing things that are externally high status.
This is a reasonable theory. But I think there are lots of naively good things that are broadly accessible to people in a way that "janitor at MIRI" isn't, hence my critique.
(Not that this one Shortform post is doing anything wrong on its own; I just hear this kind of example used too often relative to examples like the ones I mentioned, including in this popular post, though the "sweep the floors at CEA" example was a bit less central there.)
I feel like the meta effects are likely to exaggerate the differences, not reduce them? I'm surprised by the line of reasoning here.
Hmm, I think the most likely way the downside stuff happens is by flipping the sign rather than reducing the magnitude; curious why your model is different.
I wrote a bit more in the linked shortform.