I’ve spent time in the non-EA nonprofit sector, and the “standard critical story” there is one of suppressed anger among the workers. To be clear, this “standard critical story” is not always fair, accurate, or applicable. By and large, I also think that, when it is applicable, most of the people involved are not deliberately trying to play into this dynamic. It’s just that, when people are making criticisms, this is often the story I’ve heard them tell, or seen for myself.
It goes something like this:
[Non-EA] charities are also primarily funded by millionaires and billionaires. But they’re run by independently wealthy people, who do it for the PR or for the fuzzies. They underpay, overwork, and ignore the ideas of their staff. They’re burnout factories.
Any attempts to “measure the impact” of the charity are subverted by carelessness and by the undirected dance of incentives to improve the organization’s optics and keep the donations flowing. Lots of attention goes to gaming the stats, managing appearances, and sweeping failures under the rug.
Missions are thematic, and there’s lots of “we believe in the power of...”-type storytelling motivating the work. Sometimes the storytelling is explicitly labeled as such, serving to justify charities that are secular, yet ultimately faith-based.
Part of the core EA thesis is that we want to have a different relationship with money and labor: pay for impact, avoid burnout, money is good, measure what you’re doing, trust the argument and evidence rather than the optics. I expect that anybody reading this comment is very familiar with this thesis.
It’s not a thesis that’s optimized for optics or for warm fuzzies. So it should not be surprising that it’s easy to make it look bad, or that it provokes anxiety.
This is unfortunate, though, because appearances and bad feelings are heuristics we rely on to avoid getting sucked into bad situations.
Personally, I think the best way to respond to such anxieties is to redirect the energy you put into worrying toward critically, carefully investigating some aspect of EA. We make constant calls for criticism, from inside and outside. Part of the culture of the movement involves an unusual level of transparency, and responsiveness to the argument rather than the status of the arguer.
One way to approach this would simply be to form a hypothesis (e.g. the bar for grants is being lowered, and we’re throwing money at nonsense grants), and then see what evidence you can gather for and against it.
Another way would be to identify a hypothesis for which it’s hard to gather evidence either way. For example, let’s say you’re worried that an EA org is run by a bunch of friends who use their billionaire grant money to pay each other excessive salaries and sponsor Bahamas-based “working” vacations. What sort of information would you need in order to support this to the point of being able to motivate action, or falsify it to the point of being able to dissolve your anxiety? If that information isn’t available, why not? Could it be made available? Identifying a concrete way in which EA could be more transparent about its use of money seems like an excellent, constructive research project.
Overall I like your post and think there’s something to be said for reminding people that they have power; in this case, the power to probe the sources of their anxiety and reveal the ground truth. But there is something unrealistic, I think, about placing the burden on the individual with such anxiety; particularly because answering questions about whether Funder X is lowering or raising the bar too much requires in-depth insider knowledge which—understandably—people working for Funder X might not want to reveal, for a number of reasons:
they’re too busy, and just want to get on with grant-making
with distributed responsibility for grant-making in an organisation, staff will vary in how happy they are with the process, and airing such tensions in public can be awkward and uncomfortable
they’ve already done as much internal auditing / assessment as they thought proportionate
they see this work as inherently experimental / learning-by-doing, and therefore plan more post-hoc review than prior process-crafting
I’m also just a bit averse, from experience, to replying to people’s anxieties with “solve it yourself”. I was on a graduate scheme where pretty much every response to an issue raised—often really systemic, challenging issues which people hadn’t been able to solve for years, or which could be close to whistle-blowing issues—was “well, how can you tackle this?”* The takeaway message then feels something like “I’m a failure if I can’t see the way out of this, even if this is really hard, because this smart, more experienced person has told me it’s on me”. But lots of these systemic issues do not have an easy solution, and taking steps towards action can be emotionally or intellectually hard, or frankly personally costly.
From experience, this kind of response can be empowering, but it can also inculcate a feeling of desperation when clever people with can-do attitudes (like most EAs) are advised to solve something without support or guidance, especially when it is near intractable. I’m not saying this is what the response of ‘research it yourself’ amounts to—in fact, you very much gave guidance—but I think the response was not sufficiently mindful of the barriers to doing this. Specifically, I think it would be really difficult for a small group of capable people to research this from the outside, unless there were other inputs and support, e.g. significant cooperation from the Funder X they’re looking to scrutinise, or advice from other people / orgs who’ve done this work. Sometimes that is available, but it isn’t always, and I’d argue it’s something of a precondition for success, and for not getting burned out trying to get answers on the issue that’s been worrying you.
Side-note: I’ve deliberately tried to keep this commentary funder-neutral because I’m not sure how helpful the focus on FTX is. In fairness to them, they may be planning to publish their processes / invite critique (or have done so in private?), or planning to take forward rigorous evaluation of their grants like GiveWell did. So I’d rather frame this as an invitation to comment if they haven’t already, because the assumption throughout this thread seems to be “they aren’t doing anything about this”, which might not be the case.
*EDIT: In fact, sometimes a more appropriate response would have been “yes, this is a really big challenge you’ve encountered and I’m sorry you feel so hopeless over it—but the feeling reflects the magnitude of the challenge”. I wonder if that’s relevant to the EA community as well: moral uncertainty, and uncertainty about whether what we’re doing is impactful, are just tough, and it’s ok to sit with that feeling.
Note that patient philanthropy includes investing in resources besides money that will allow us to do more good later; e.g. the linked article lists “global priorities research” and “Building a long-lasting and steadily growing movement” as promising opportunities from a patient longtermist view.
Looking at the Future Fund’s Areas of Interest, at least 5 of the 10 strike me as promising under patient philanthropy: “Epistemic Institutions”, “Values and Reflective Processes”, “Empowering Exceptional People”, “Effective Altruism”, and “Research That Can Help Us Improve”.
Two factual nitpicks:
1. The fellowship gives $50k to 100 fellows, a total of $5.5mil.
2. The money’s not described by AF as “no strings attached.” From their FAQ:
Scholarship money should be treated as “professional development funding” for award winners. This means the funds could be spent on things like professional travel, textbooks, technology, college tuition, supplementing unpaid internships, and more.
Students will receive ongoing guidance to manage and effectively spend their scholarship funds.
For Fellows ($50,000), a (taxed) amount is placed in a trust fund managed by the Atlas Fellowship team. Once the student turns 18, they have two options:
1. Submit an award disbursement request every year, indicating the amount of scholarship the student would like to withdraw for what purposes. Post-undergrad, the remainder of the funds are sent to the student. This helps avoid scholarship displacement.
2. Receive the scholarship funds as a lump-sum payment sent directly to the student.
You’re asking for transparency about this fellowship’s strategy for attracting applicants, and about how it’ll get objective information on whether or not this is an effective use of funds.
Glancing over the FAQ, the AF seems to be trying to identify altruistically-minded, high-school-age geniuses and introduce them to the ideas of Effective Altruism. I can construct an argument that this is a good-but-hard-to-measure idea in line with the concept of hits-based investment, but I don’t see one on their web page. In this case, I think the feedback loop for evaluating whether this was a good idea involves looking at how these young people spend their money, and what they accomplish over the next 10 years.
It also seems to me that if, as you say, we’re still funding-constrained in an important way, then experimenting with ways to increase our donor base (e.g. by sharing our ideas with smart young people likely to be high earners) and to make more efficient use of our funds (by experimenting with low-cost, potentially very impactful hits-based approaches like this) is the right choice.
I could easily be persuaded that this program or general approach is too flawed, but I’d want to see a careful analysis that looks at both sides of the issue.
Thanks for the corrections, fixed. I agree that the hits-based justification could work out, just would like to see more public analysis of this and other FTX initiatives.
Moved this comment to a shortform post here.
FYI, this just links to this same Forum post for me.
Thanks, fixed.