I think the community should welcome you. I share many of your motivations. You seem “altruistic” in the way that “counts” for most people’s purposes.
I really like this question because it raises the uncomfortable possibility that the motivations people actually have differ from what we imagine they are. It seems to me that the community should not lie to itself about… well, anything, but least of all this; and I do suspect there’s a lot of self-deception going on.
I think there are many EAs with “pure” motivations. I don’t know what the distribution of motivational purity is, but I don’t expect that I’m a modal EA.
I came via osmosis from the rationalist community (partly due to EA caring about AI safety and x-risk). I was never an altruistic person (I’m still not).
I wouldn’t have joined a movement focusing on improving lives for the global poor (I have donated to GiveWell’s Maximum Impact Fund, but that’s due to value drift after joining EA).
This is to say that I think that pure EAs exist, and I think that’s fine, and I think they should be encouraged.
Being vegan, frugal living, etc. are all fine IMO.
I’m just against using them as purity tests. If the kind of people we want to recruit are people strongly committed to improving the world, then I don’t think those are strong (or even useful) signals.
I think ambition is a much stronger signal of someone who actually wants to make an impact than veganism/frugality/other moral fashions.
As long as we broadly agree on what a better world looks like (more flourishing, less suffering), then ambitious people seem valuable.
Even without strict moral alignment, we can pursue Pareto improvements on what we both consider a brighter world?
Like, most humans probably agree a lot more about what is moral than they disagree, and we can make the world better in ways that we both agree on?
I don’t think that e.g. not caring about animal welfare is that big an obstacle to cooperating with other EAs? I don’t want animals to suffer, and I wouldn’t hinder efforts to improve animal welfare. I’d just work on issues that are more important to me.
Very compatible with “big tent” EA IMO.