I’ve engaged with a few activist and political communities in the past, primarily around environmental issues and Green politics. My overall take is that I would find it hard to be part of those communities today, compared to the ones that interest me now. From what I remember, epistemic practices tended to be very bad, with lots of motivated reasoning, cherry-picking, various biases, etc. That doesn’t necessarily mean the people I met were wrong, but how they made up their minds about issues seems very flawed in retrospect. By comparison, the epistemic quality of Effective Altruism appears to be its main competitive advantage over the other communities I’ve encountered. Many people in the community are genuinely cause-neutral and truly adopt (or at least try to adopt) a scout mindset.
If anything seems better about these communities, it’s the fact that their direct engagement with politics, the media, etc., makes them much more aware of the importance of public relations and not being perceived as bad actors. My perception – reinforced by everything that happened in EA in late 2022 – is that many EAs see public relations as unnecessary (sometimes even bad, when “PR” is used as a derogatory term). I’ve met quite a few people who seem to think that the way non-EA people perceive EA doesn’t matter at all, as long as EA people are saying things that are evidence-based and smart. I believe this is deeply wrong; a community of smart and “very-right” people won’t have much impact if it has such a bad image that no one dares involve it in public discussions.
Interestingly, in the case of EA, this dismissive attitude toward image sometimes applies to individuals as well. Both online and at EAG, I’ve met more people than I expected who seemed to disregard the benefits of social norms, politeness, kindness, etc., and who behaved in a way that seemed to say “I’m too smart to be slowed down by these stupid things”. (To be clear, I don’t think the majority of EAs are like this at all; but the prevalence of this behavior seems much higher than in the general population.)
Another thing that comes to mind, valued by people outside EA but shrugged off by people inside it, is institutional stability. Having worked or collaborated with quite a few companies, political parties, research organizations, NGOs, etc., I think there is genuine value in building institutions on solid foundations. For EA organizations, this relates to many questions people have raised since the FTX debacle: who should run EA organizations? What should their boards look like? What share of board members should be EAs? How much board overlap is acceptable between closely related EA organizations? I think many EAs have shrugged off these questions as boring, but the long-term stability of the overall EA community depends on them.
Funding runway also falls into this category: many EAs reason about funding stability as if every skilled person was happy to work at an organization that could run out of money in less than a year. Again, I don’t think this is a good way of planning for the long term. A recent post that described NTI as “too rich” for holding more than 1.5 years’ expenditure is one example of this bad habit.
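For concreteness, the runway figure at issue is just reserves divided by annual expenditure; here is a minimal sketch of that arithmetic (all figures are hypothetical, not NTI’s actual finances):

```python
# Illustrative only: runway in years = cash reserves / annual expenditure.
# The dollar amounts below are made up for the example.
def runway_years(reserves: float, annual_expenditure: float) -> float:
    """Return how many years an organization can operate on current reserves."""
    return reserves / annual_expenditure

# An org holding $15M against $10M/year of spending has a 1.5-year runway,
# the threshold the post described as "too rich".
print(runway_years(15_000_000, 10_000_000))  # 1.5
```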
Could you expand on your last point? I’m not sure I understood it properly.
many EAs reason about funding stability as if every skilled person was happy to work at an organization that could run out of money in less than a year
I would agree that having charities with long-term funding and stability is great. At the same time, I feel that if a charity is provably effective, it will keep existing even with less than a year of funding, because it shouldn’t have trouble asking for more.
Therefore, if you keep the funding under a year, the charities that work will continue working, and those that are less promising will dissolve. What would the solution be, then? If you provide 3 years of funding to the effective charities, I assume nothing would change, because those charities wouldn’t have trouble getting the funding anyway. If you give 3 years of funding to an ineffective charity, do they just have 3 years to waste, or do they return the money?
I agree that things could work like this in theory, but I see two significant issues with how you describe it.
First, the process isn’t as simple as “charities are created; the ones proven effective easily and regularly get money; the ineffective ones run out of money and disappear”. That resembles the perfect competition model in economics: handy for reasoning about the world, but simplifying reality to the point of hiding many complexities. In reality, many ineffective charities survive for decades, while promising ones sometimes struggle to find the funding they need. These imperfections are one of the very reasons effective altruism was first conceptualized.
Second, even if this ideal model were true, equally skilled people still respond differently to risk. For example, in practice, there’s a significant difference between being able to say to a potential hire:
“Right now, we only have money to pay our staff for less than a year, but our charity is provably effective, so there’s nothing to worry about.”
“We have 2-3 years of financial runway. Beyond that, we’re confident we’ll find more money, though we can’t be 100% certain.”
It’s a recurrent bias within EA not to see much difference between these two statements. EA people tend to be more tolerant of risk in their career decisions, and okay with making big bets that don’t always pay off. They also tend to be relatively young and without kids.
But once an organization grows in size, impact, and ambition, it can’t rely forever on risk-tolerant twenty-something EAs. It needs more experienced and senior people to join. And with more experience often come various financial commitments (e.g., mortgage, kids); that’s where financial stability can make a big difference.