Thanks for sharing this potential shift, Oli. If the fund split, would the same managers be in charge of both funds, or would you add a new management team? Also, would you mind giving a couple of examples of grants that you’d consider “medium-risk”? And do you see these grants as comparably risky to the “medium-risk” grants made by the other funds, or just less risky than other grants made by the LTFF?
My sense is that the other funds are making “medium-risk” grants that have substantially simpler paths to impact. Using the Health Fund’s grant to Fortify Health as an example, the big questions are whether FH can get the appropriate nutrients into food and then get people to consume that food, as there’s already strong evidence that micronutrient fortification works. By contrast, I’d argue that the LTFF’s mandate comes with a higher baseline level of risk, since “it is very difficult to know whether actions taken now are actually likely to improve the long-term future.” (Of course, that higher level of risk might be warranted; I’m not making any claims about the relative expected values of grants made by different funds.)
Note that I don’t currently feel super comfortable with the “risk” language in the context of altruistic endeavors; I think it conjures up a bunch of confusing associations with financial risk (where you usually have an underlying assumption that you are financially risk-averse, which usually doesn’t apply to altruistic efforts). So I am not fully sure whether I can answer your question as asked.
I actually think a major concern that is generating a lot of the discussion around this is much less “high variance of impact” and more something like “risk of abuse”.
In particular, I think relatively few people would object if the funds were doing the equivalent of participating in the donor lottery, even though that would very straightforwardly increase the variance of our impact.
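As a toy illustration of that variance point (the numbers here are made up purely for illustration, and the amount of money the fund ends up directing is used as a crude proxy for impact): suppose the fund put $100k into a donor lottery with a $1M total pool, so it would win the right to allocate the full pool with probability 0.1. The expected amount it directs stays at $100k, but the variance jumps from zero to

$$0.1 \times 0.9 \times (\$1\text{M})^2 \approx (\$300\text{k})^2.$$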
Instead, I think the key difference between many of the LTFF grants that were perceived as “risky” and the grants of most other funds (as well as our own grants that were perceived as “less risky”) is that the risky grants were harder to judge from outside the fund and were given to people to whom we have closer personal connections, both of which create the potential for abuse by the fund managers, i.e. funneling funds to themselves and their personal connections.
I think people are justified in being concerned about risk of abuse, and I also think that people generally have particularly high standards for altruistic contributions not being funneled into self-enriching activities.
One observation that I think illustrates this pretty well is the response to last round’s grant for improving reproducibility in science. I consider that grant to be one of the riskiest (in the “variance of impact” sense) that we ever made: its effects are highly indirect, many steps removed from the long-term future and global catastrophic risk, and only really relevant in the somewhat smaller fraction of worlds where the reproducibility of cognitive science becomes relevant to global catastrophic risks.
However, despite that, almost everyone I’ve talked to classified that grant as one of the least “risky” grants we made. I think this is because, while the grant’s path to impact was long and indirect, the reasoning behind it was broadly available and the information necessary to make the judgement was fully public. There was common knowledge that grants of that type are a plausible path to impact, and there was no obvious way in which the grant would benefit us as the grantmakers.
Now, in this new frame, let me answer your original questions:
At least from what I know, the management team would stay the same for both funds.
In the frame of “risk of abuse”, I consider the reproducibility-in-science grant to be a “medium-risk” bet. I would also consider our grants to Ought and MIRI to be “medium-risk” bets. I would classify many of our grants to individuals as high-risk bets.
I think those “medium-risk” grants are indeed comparable in risk of abuse to the average grant of the meta-fund, which I think has generally exercised individual judgement less and deferred more to a broad consensus on which things have positive impact (which I do think has resulted in a lot of value being left on the table).
All of this said, I am not yet really sure whether the “risk of abuse” framing accurately captures people’s feelings here, or whether it’s the appropriate frame through which to look at things.
I do think that at the current margin, using only granting procedures that have minimal risk of abuse leaves a lot of value on the table, because I think evaluating individual people and their competencies, as well as drawing on local expertise and hard-to-communicate experience, is a crucial component of good grantmaking.
I do think we can build better incentive and accountability systems to lower the risk of abuse. That is one of the reasons why I’ve been investing so much effort into producing comprehensive and transparent grant writeups: they expose our reasoning to the public and allow people to cross-check and validate the reasoning behind our grants, and to call us out if they think our reasoning for specific grants is spotty. Reducing the risk of abuse in this way should allow us to make grants that take more advantage of our individual judgement and be more effective on net.
Very helpful response! This (like much of the other detailed transparency you’ve provided) really helped me understand how you think about your grantmaking (strong upvote), though I wasn’t actually thinking about “risk of abuse” in my question.
I’d been thinking of “risk” in the sense that the EA Funds materials on the topic use the term: “The risk that a grant will have little or no impact.” I think this is basically the kind of risk that most donors will be most concerned about, and is generally a pretty intuitive framing. And while I’m open to counterarguments, my impression is that the LTFF’s grants are riskier in this sense than grants made by the other funds because they have longer and less direct paths to impact.
I think “risk of abuse” is an important thing to consider, but not something worth highlighting to donors through a prominent section of the fund pages. I’d guess that most donors assume that EA Funds is run in a way that “risk of abuse” is quite low, and that prospective donors would be turned off by lots of content suggesting otherwise. Also, I’m not sure “risk of abuse” is the right term. I’ve argued that some parts of EA grantmaking are too dependent on relationships and networks, but I’m much more concerned about unintentional biases than the kind of overt (and unwarranted) favoritism that “risk of abuse” implies. Maybe “risk of bias”?
I’d been thinking of “risk” in the sense that the EA Funds materials on the topic use the term: “The risk that a grant will have little or no impact.” I think this is basically the kind of risk that most donors will be most concerned about, and is generally a pretty intuitive framing.
To be clear, I am claiming that the section you are linking to is not very predictive of how I expect CEA to classify our grants, and is not very predictive of the attitudes I have seen from CEA and other stakeholders and donors of the funds in terms of whether they will have an intuitive sense that a grant is “risky”. Indeed, I think that page is kind of misleading, and I think we should probably rewrite it.
I am concretely claiming that CEA’s attitudes, the attitudes of various stakeholders, and most donors’ attitudes are all better predicted by the “risk of abuse” framing I have outlined. In that sense, I disagree with you that most donors will be primarily concerned about the kind of risk that is discussed on the EA Funds page.
Obviously, I do still think there is a place for considering something more like “variance of impact”, but I don’t actually think that that dimension has played a large role in people’s historical reactions to grants we have made, and I don’t expect it to matter too much in the future. Most people I have interacted with tend to be relatively risk-neutral when it comes to their altruistic impact (and I don’t know of any good arguments for why someone should be risk-averse in their altruistic activities, since the case for diminishing marginal returns at the scales on which our grants tend to influence things seems pretty weak).
Edit: To give a more concrete example here, the grant that has by far been classified as the “riskiest” grant we have made, and that from what I can tell has motivated a lot of the split into “high-risk” and “medium-risk” grants, is our grant to Lauren Lee. That grant does not strike me as having a large downside risk, and I don’t think anyone I’ve talked to has suggested that it does. The risk people have talked about is the risk of abuse I have been describing, along with the associated public relations risks; many have critiqued the grant as “the Long Term Future Fund giving money to their friends”, which highlights the dimension of abuse risk much more concretely than the dimension of high variance.
In addition to that, grants that operate at a higher level of meta than other grants, i.e. grants that facilitate recruitment, training, or various forms of culture-development, have not broadly been described to me as “risky”, even though from a variance perspective those kinds of grants are almost always much higher variance than the object-level activities they support (since their success or failure depends on the success of those object-level activities). This again strikes me as strong evidence that variance of impact (which seems to be the perspective the EA Funds materials take) is not a good predictor of how people classify the grants.
Obviously, I do still think there is a place for considering something more like “variance of impact”, but I don’t actually think that that dimension has played a large role in people’s historical reactions to grants we have made, and I don’t expect it to matter too much in the future.
Relatedly, I don’t recall anyone pointing out that funding a large number of ‘risky’ individuals, instead of a small number of ‘safe’ organisations, might be less risky (in the sense of lower variance), because the individual risks are largely independent, so you get a lot of portfolio diversification.
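To sketch the diversification point (a minimal illustration under idealized assumptions that the grantees’ per-dollar impacts are independent with equal variance, which real grants only partly satisfy): if a budget $B$ is split evenly across $n$ grantees whose per-dollar impacts $X_1,\dots,X_n$ are independent with common variance $\sigma^2$, then

$$\mathrm{Var}\left(\sum_{i=1}^{n}\frac{B}{n}X_i\right)=\sum_{i=1}^{n}\frac{B^2}{n^2}\sigma^2=\frac{B^2\sigma^2}{n},$$

compared to $B^2\sigma^2$ for putting the whole budget into a single grantee with the same per-dollar variance, i.e. an n-fold reduction in the variance of total impact from spreading the same money across independent bets.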
To be clear, I am claiming that the section you are linking to is not very predictive of how I expect CEA to classify our grants, and is not very predictive of the attitudes I have seen from CEA and other stakeholders and donors of the funds in terms of whether they will have an intuitive sense that a grant is “risky”. Indeed, I think that page is kind of misleading, and I think we should probably rewrite it.
I am concretely claiming that CEA’s attitudes, the attitudes of various stakeholders, and most donors’ attitudes are all better predicted by the “risk of abuse” framing I have outlined. In that sense, I disagree with you that most donors will be primarily concerned about the kind of risk that is discussed on the EA Funds page.
If risk of abuse really is the big concern for most stakeholders, then I agree rewriting the risk page would make a lot of sense. Since that’s a fairly new page, I’d assumed it incorporated current thinking/feedback.
*nods* This perspective is still very new to me, and I’ve only briefly talked about it with people at CEA and other fund members. My sense was that the “risk of abuse” framing resonated a good amount, but it is definitely in no way a consensus among the current fund stakeholders; it’s just the best way I can currently make sense of the constraints the fund is facing. I don’t know yet to what degree others will find this perspective compelling.
I don’t think anyone made a mistake by writing the current risk page, which I think was an honest and good attempt at explaining a bunch of observations and perspectives. I just think I now have a better model that I would prefer to use instead.