I respect you immensely for writing this, but some degree of altruism is required for being an effective altruist—not an infinite duty to self-sacrifice, but the understanding that you can be trusted to do so on the big things, and costly signals that you will are helpful. Giving 10% is one such costly signal, and you don't have to do all of them (I also think you overestimate the fraction of EAs who are vegan). However, I think the divergence between wanting the best for the world and wanting a high profile from improving the world occurs everywhere: in the fairly plausible world where AI alignment is impossible, your most effective action is probably either not working on AI or being subtly so bad at it that the field suffers, neither of which will win you much status (assuming you can't prove that alignment is impossible). This is a general instance of the problem outlined here: “I am most motivated by the prospect of creating a radically better world as opposed to securing our current one from catastrophe”. A biased motivation combined with the unilateralist's curse can easily give your actions negative expected utility but positive expected status payoff: you don't lose face if everyone goes extinct. There are lots of plausible real examples of this, like geoengineering or gain-of-function research. Which way you'd fall on these questions in practice is a much better test of whether you're “actually EA” than whether you buy cheap things.
On a more institutional level, it is unhelpful for EA to become associated with narcissism (which in some circles it already is). Since the cost is borne by the movement rather than the individual, we should expect misalignment until being EA is actively harmful to your reputation, so some degree of excluding narcissists whose expected personal impact is only marginally positive is warranted.