Thanks for the question, Brian. I’m a big fan of the effective altruism movement and have tracked it for some time. That said, I am by no means an expert, so my answers are those of a casual observer. Caveat lector!
THINGS I LIKE:
*Keeping it simple: I like forums where people can stress-test their ideas, assumptions, and arguments in the service of pursuing good. The more rational, evidence-based decision makers we have, the better off the world will be, whether in non-profit work or any other field. EA provides concepts and tools, as well as a community within which to test them. Last but not least, the EA movement encourages people to think deeply about their impact in and on the world. This is a wonderful thing.
THINGS I HAVE PONDERED:
*Does the EA community tend to overemphasize philanthropy? If so, why? If you look at the etymology of phil-anthropy, it is literally “love of humankind.” Many of the causes favored in the EA community seem to focus on the well-being of humans (and animals). While I strongly support causes that focus on human well-being—psychedelic science is certainly an example—I simultaneously believe that there are many worthwhile causes with measurable benefits that don’t have humans as the sole or primary beneficiary. This is why I always refer to my non-profit work as “non-profit work” and never as philanthropy. I dislike humankind a good portion of every week. We’ve made a fantastic mess of things.
*Do some in EA inadvertently select non-profits that are the least likely to survive? This could be a total misread, but I have come across a few passages like the one below from Founders Pledge (bolding is mine). I should note that I agree with much of their other writing:
Our research conclusions do not imply that one nonprofit does more important work than another, or that a particular cause is more worthy of support than another. They instead reflect our overall view of which funding opportunities at nonprofits could currently use extra funds most effectively.
This is because we aim to recommend to our members funding opportunities with a maximum counterfactual impact. That is, our goal is to recommend opportunities where extra funding by our members would make the largest difference compared to if they provided no extra funding. Paradoxically, this implies that if a nonprofit does high-impact work but is in addition very successful at raising funds for that work, we should not recommend any funding opportunities at that nonprofit.
###
In the for-profit startup world, if you invest in the seed round of a company and the startup can’t raise a subsequent Series A, they are toast and the value of your investment goes to zero. No one passes Go.
In my experience, cash flow and donations are also the lifeblood of 99.9% of non-profits. I’ve seen multiple non-profit projects fail because they were ineffective at raising funds. It’s fundamental.
In some cases, it’s not a total red flag. For example, difficulty in raising funds might stem from systemic or structural issues (e.g., a scientist asked by the administration to raise his or her own funds for a complex scientific study within a small university department). If, on the other hand, a non-profit has someone in “development” (fundraising) and still can’t raise funds, it is simply bad at raising money.
For me, an inability to raise funds effectively would be a disqualifier, not a qualifier. It doesn’t matter how good someone’s project is: if it requires two years to reach fruition and they run out of cash six months after taking your donation, it’s a failure IMHO. Furthermore, it’s been my experience that the non-profits that are worst at fundraising are never the most effective at using funds.
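To make the runway concern concrete, here is a minimal sketch of the arithmetic behind the two-years-versus-six-months scenario (function name and all figures are hypothetical, invented for illustration):

```python
def runway_months(cash_on_hand, monthly_burn, monthly_fundraising):
    """Months until cash runs out, given net burn. Returns None if sustainable."""
    net_burn = monthly_burn - monthly_fundraising
    if net_burn <= 0:
        return None  # raising at least as fast as spending
    return cash_on_hand / net_burn

# A project needing 24 months to reach fruition, holding $120k,
# burning $20k/month, with no fundraising capacity:
months = runway_months(120_000, 20_000, 0)
print(months)  # 6.0 -- the failure mode described above
```

The point of the sketch: impact projections past the runway date are moot, which is why fundraising ability belongs in the evaluation.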
*Is it possible that some in EA over-fetishize measurement? I am all for quantification, but I think there are some risks of worshipping the gods of many decimal points, or immediately scoffing at variables that are harder to peg numbers onto. A few thoughts:
Probabilistic thinking and making predictions/bets with incomplete information is an incredible power worth developing. Think poker or blackjack versus chess. I’m not sure this gets enough attention within the EA community, and 2020 plus COVID certainly showed us that the majority of humans fail in this department. I haven’t read it, but I’ve heard good things about Annie Duke’s book, Thinking in Bets: Making Smarter Decisions When You Don’t Have All the Facts.
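One way to make the poker-versus-chess contrast concrete is expected value under uncertainty: unlike chess, where information is complete, a bet is judged by its probability-weighted payoff. A minimal sketch (the numbers are invented):

```python
def expected_value(outcomes):
    """Expected value of a bet: sum of probability * payoff over all outcomes."""
    return sum(p * payoff for p, payoff in outcomes)

# A bet you lose 70% of the time can still be worth taking
# if the payoff structure is right:
bet = [(0.3, 500), (0.7, -100)]  # 30% chance to win $500, 70% chance to lose $100
print(expected_value(bet))  # 80.0 -> positive EV despite usually losing
```

This is the habit of mind the poker analogy points at: judging decisions by their odds and payoffs, not by any single outcome.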
It’s easy to fake importance with precision. For example, let’s say someone hands out 100 surveys to strangers to assess anxiety levels before and after eating free frozen yogurt. There are a few possible design problems already, but their abstract later includes something like “Administration of the frygrt270 intervention [frozen yogurt] decreased mean self-reported anxiety by 23.742% with a standard deviation of...” Never mind that people weren’t given guidance on how to rate themselves, they got more free yogurt if they showed “good results,” etc. It’s easy to dress up sloppiness with numbers, even if the math checks out. Well quantified does not mean well reasoned or well done.
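The frozen-yogurt example can be illustrated with a toy simulation (everything here is invented, including the seed chosen to echo the made-up “frygrt270” name): even when there is no real effect at all, a before/after comparison will spit out an impressively precise-looking percentage.

```python
import random
import statistics

random.seed(270)  # arbitrary seed, nodding to the post's fictional "frygrt270"

# Simulate 100 before/after anxiety self-ratings with NO real effect:
# both samples are noisy 1-10 ratings drawn from the same distribution.
before = [random.randint(1, 10) for _ in range(100)]
after = [random.randint(1, 10) for _ in range(100)]

change = 100 * (statistics.mean(before) - statistics.mean(after)) / statistics.mean(before)
print(f"Mean self-reported anxiety changed by {change:.3f}%")
# Prints a three-decimal-place figure even though the "effect" is pure noise.
```

Three decimal places of precision, zero grams of validity: the decimals describe the arithmetic, not the quality of the measurement.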
What’s the ideal ratio of analysis to action? When should people be held accountable to some form of action? There are always more spectators than people on the field, and that’s OK. But I think it’s worth being aware of incentives that might keep people off of the field. In the EA community, people appear to be rewarded for well-worded debate and argument. There is social reinforcement when one engages well with the community. Could it be that overly engaging makes one less likely to be an effective altruist in the real world? For example, is the guy constantly racking up karma online within EA really being more “effective” than the woman who just helps one old lady across the street per day? Put another way, is mildly—or massively—ineffective altruism in practice better than effective altruism in theory? The wordplay and semantic jousting is fun, but it’s the real-world results that matter at the end of the day. I think it’s worth asking (and perhaps it’s been asked and figured out already!) how the community can hold members accountable in some fashion, to ensure that those talking about effective altruism are actually putting skin in the game in the wider world. That would really do a lot of good.
Thanks for the thoughtful answers! Seems like you’ve pondered quite a bit on EA. Here are my comments and reactions, if you or others would like to read them:
On “*Does the EA community tend to overemphasize philanthropy? If so, why?...”
That’s the first time I’ve heard of that etymology for philanthropy. Anyway, I think what you meant here is that the EA community overemphasizes working on causes that mainly help humans and farm animals, at the expense of causes that help other beneficiaries, e.g., wild animals or the environment.
To some extent you are right, but maybe you’re not aware that some people and organizations in the EA community are also doing important work for wild animals and the environment. There are two EA-aligned organizations working on wild animal welfare: Animal Ethics and Wild Animal Initiative.
Wild Animal Initiative became a top charity of Animal Charity Evaluators last year, and they focus on helping scientists, grantmakers, and decision-makers investigate important and understudied questions about wild animal welfare. You might be interested to read their research or donate to them. They wrote this article on trophic interactions, which you might be interested in given that you mentioned trophic cascades in a separate answer.
For the environment, the EA-aligned organization Founders Pledge has done research into the highest-impact funding opportunities for climate change here. I’m not an expert in this area, but it’s quite possible that these organizations may have a larger long-term positive effect on biodiversity and on preventing further environmental damage than the Amazon Conservation Team, which you support.
On “*Do some in EA inadvertently select non-profits that are the least likely to survive?”
When you said “For me, an inability to raise funds effectively would be a disqualifier, not a qualifier,” I think it’s quite possible that the most evidence-based and effective charities are not the ones that can raise funds most effectively and fill all of their funding gaps. Many non-EA-recommended charities use disingenuous methods or work on heart-tugging causes to get funding, while EA-recommended charities like GiveWell’s top charities work on less popular causes and do less disingenuous marketing. So in a way, I could see how EA does select non-profits that are less likely to survive, but that’s also a sign that EA is donating to the best funding opportunities. I think EA generally does well to make sure that the effective charities it supports don’t die off.
On “*Is it possible that some in EA over-fetishize measurement?”
I think this is somewhat true, but mainly for the causes of global health and development, and to a lesser extent animal welfare. EA is willing to be a lot more qualitative and speculative for longtermist projects.
On “What’s the ideal ratio of analysis to action? When should people be held accountable to some form of action?”
This is a good point to raise. The EA community does incentivize good debate and engagement on the forum, as well as reading resources and joining discussion groups, and that could lead to less actual important work being done. But I’d like to think most people in EA spend an acceptable amount of time on debate and argumentation, and are able to improve both their own and the community’s worldviews and decisions through it. And there are EAs who are more action-oriented, e.g., the founders who go through Charity Entrepreneurship’s incubation program!