I’ll list some criticisms of EA that I heard, prior to FTX, from friends and acquaintances whom I respect (which doesn’t mean I think all of these critiques are good). I am paraphrasing a lot, so I might be misrepresenting some of them.
Some folks in EA are a bit too pushy about getting new people to engage more. This came from a person who thought of doing good primarily in terms of their contracts with other people, supporting people in their local community, and increasing cooperation and coordination in their social groups. They also cared about helping people globally (they donated some of their income to global health charities and were vegetarian) but felt it wasn’t the only thing they cared about. They found that, in their interactions with EAs, the other person would often bring up the same thought experiments they had already heard, in order to rid them of their “bias towards helping people close to them in space-time”. This was annoying for them. Coming from a background in law, they also found the emphasis on AI safety off-putting: they didn’t have the technical knowledge to form an opinion on it, and the arguments were often presented by EA students who failed to convince them and who, they felt, didn’t have good reasons for believing those arguments themselves.
Another person mentioned that it seemed weird to them that EA spent a lot of resources on helping itself. Without looking too closely, the ratio of resources spent on meta EA work to directly impactful work seemed suspiciously high. Their general priors about communities with access to billionaire money, influence, and young people looking for a purpose also made them assume negative things about the EA community. This made it harder for them to take some EA ideas seriously. I feel sympathetic to this; if I weren’t already part of the effective altruism community and didn’t understand the value in a lot of the EA meta work, I would perhaps feel similarly suspicious.
Someone else mentioned that lots of EA people they met came across as young, not very wise, and quite arrogant given their level of experience and knowledge. This put them off. As one example, they had negative experiences with EAs who had no ML experience trying to persuade others that AI x-risk was the biggest problem.
Then there was the suspicion that EAs, because of their emphasis on utilitarianism, might be willing to lie, break rules, push the big guy in front of the trolley, etc., if it were for the “greater good”. This made them hard to trust.
Some people I have briefly talked to mainly thought EA was about earning to give by working for Wall Street, and they thought it was harmful because of that.
I didn’t hear the “EA is too elitist” or “EA isn’t diverse enough” criticisms much (I can’t think of a specific time someone brought those up as a reason they chose not to engage more with EA).
I have talked to some non-EA friends about EA stuff after the FTX crisis (including one who himself lost a lot of money that was on the platform), mostly because they sent me memes about SBF’s effective altruism. My impression was that their opinion of EA (generally mildly positive, though not personally enthusiastic) did not change much as a result of FTX. This is unfortunately probably not the case for people who heard about EA for the first time because of FTX; they are more likely to assume bad things about EAs if they don’t know any in real life (and I think this is, to some extent, a justified response).
Why would it be harmful to work for Wall Street earning to give? Sincere question.
Finance is like anything else. You can have an ethically upstanding career, or you can have an ethically dubious career. Seems crazy to generalize.
I haven’t thought about this much. I am just reporting that some people I briefly talked to thought EA was mainly that and had a negative opinion of it.
Because finance people are bad people, and therefore anything associated with them is bad. Or, for a slightly longer chain: because money is bad, people who spend their lives seeking money are therefore bad, and anything associated with those people is bad.
Don’t overthink this. It doesn’t have to make sense, there just have to be a lot of people who think it does.
This seems counterproductively uncharitable. Wall Street in particular, and finance in general, is perceived by many as an industry that is harmful overall and has negative value, and participating in it is seen as contributing to harm while producing very little added value for anyone outside of high-earning elite groups.
It makes a lot of sense to me that someone who thinks the finance industry is, on net, harmful will see ETG in finance as a form of ends-justify-the-means reasoning, without having to reduce their view to a caricature of “money bad = Wall Street bad = ETG bad, it doesn’t have to make sense”.
That’s literally just the same thing I said with more words. They don’t have reasons to think finance is net negative, it just is polluted with money and therefore bad.