I agree with you that on one framing, influencing the long-run future is risky, in the sense that we have no real idea of whether any actions taken now will have a long-run positive impact, and we’re just using our best judgement.
However, it also feels like there are important distinctions between categories of risk, such as those stemming from organisational maturity. For example, a grant to MIRI (an established organisation, with legible financial controls and existing research outputs that are widely cited within the field) feels different to me from, say, a grant to an early-career independent researcher working on an area of mathematics that’s plausibly but as-yet only speculatively related to advancing the cause of AI safety, or from funding someone to write fiction that draws attention to key problems in the field.
I basically tried to come up with an ontology that would make intuitive sense to the average donor, and then tried to address its shortcomings by using examples on our risk page. I agree with Oli that it doesn’t fully capture things, but I think it’s a reasonable attempt to capture an important sentiment (albeit in a very reductive way), especially for donors who are newer to the product and to EA. That said, everyone will have their own sense of what they consider too risky, which is why we encourage donors to read through past grant reports and see how comfortable they feel before donating.
The conversation with Oli above about ‘risk of abuse’ being an important dimension is interesting, and I’ll think about rewriting parts of the page to account for different framings of risk.
I basically tried to come up with an ontology that would make intuitive sense to the average donor, and then tried to address the shortcomings by using examples on our risk page. I agree with Oli that it doesn’t fully capture things, but I think it’s a reasonable attempt to capture an important sentiment (albeit in a very reductive way), especially for donors who are newer to the product and to EA.
Yeah, I think the current ontology is a pretty reasonable/intuitive way to address a complex issue. I’d update if I learned that concerns about “risk of abuse” were more common among donors than concerns about other types of risk, but my suspicion is that “risk of abuse” is mostly an issue for the LTFF, since it makes more grants to individuals and the grant that was recommended to Lauren serves as something of a lightning rod.
I do think, per my original question about the LTFF’s classification, that the LTFF is meaningfully riskier than the other funds along multiple dimensions of risk: relatively more funding of individuals vs. established organizations, more convoluted paths to impact (even for more established grantees), and more risk of abuse (largely due to funding more individuals, and perhaps a less consensus-based grantmaking process).
everyone will have their own sense of what they consider too risky, which is why we encourage donors to read through past grant reports and see how comfortable they feel before donating.
Now that the new Grantmaking and Impact section lists illustrative grants for each fund, I expect donors will turn to that section rather than clicking through each grant report and trying to mentally aggregate the results. But as I pointed out in another discussion, that section is problematic: the grants it lists are often unrepresentative and/or incorrect, and even if the information were accurate to begin with, it would quickly grow stale.
As a solution (which other people seemed interested in), I suggested a spreadsheet that would list and categorize grants. If I created such a spreadsheet, would you be willing to embed it in the fund pages and keep it up to date as new grants are made? The maintenance is the kind of thing a (paid?) secretariat could help with.
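For concreteness, here’s a minimal sketch of how such a spreadsheet might be structured and exported; the column names, category labels, and example row are purely my own illustrative assumptions, not a format the fund team has agreed to.

```python
import csv

# Hypothetical columns for a public grants spreadsheet.
# The labels below are assumptions for the sake of example, not an agreed taxonomy.
FIELDNAMES = [
    "fund",          # e.g. "Long-Term Future Fund"
    "grant_round",   # e.g. "2019-04"
    "grantee",       # organisation or individual
    "grantee_type",  # "established organisation" vs. "individual"
    "amount_usd",
    "purpose",       # one-line description taken from the grant report
    "report_url",    # link back to the official grant report
]

# A single illustrative (made-up) row, just to show the shape of the data.
EXAMPLE_ROWS = [
    {
        "fund": "Long-Term Future Fund",
        "grant_round": "2019-04",
        "grantee": "Example grantee",
        "grantee_type": "individual",
        "amount_usd": 30000,
        "purpose": "Independent research (illustrative placeholder)",
        "report_url": "https://example.org/grant-report",
    },
]

# Write the sheet to CSV so it could be imported into a spreadsheet tool,
# embedded on the fund pages, and extended as new grant rounds are published.
with open("grants.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDNAMES)
    writer.writeheader()
    writer.writerows(EXAMPLE_ROWS)
```

The main design choice here is keeping a link back to the official report for every row, so the categorization can always be checked against the source and nothing gets misrepresented.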