I do not believe the $28,000 grant to buy copies of HPMOR meets the evidential standard demanded by effective altruism. “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” With all due respect, it seems to me that this grant feels right but lacks evidence and careful analysis.
The Effective Altruism Funds are “for maximizing the effectiveness of your donations” according to the homepage. This grant’s claim that buying copies of HPMOR is among the most effective ways to donate $28,000 by way of improving the long-term future rightly demands a high standard of evidence.
You make two principal arguments in justifying the grant. First, the books will encourage the Math Olympiad winners to join the EA community. Second, the books will teach the Math Olympiad winners important reasoning skills.
If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and 80,000 Hours are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one argues that HPMOR is more effective at encouraging Effective Altruism (a claim I doubt, and one substantiated nowhere), one also has to go further and provide evidence that the difference in cost between each copy of HPMOR and any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than HPMOR in encouraging effective altruism. It is also free!
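To make the cost gap concrete, here is a rough back-of-envelope sketch of how many people the same grant could reach. Only the $43-per-copy HPMOR figure comes from the grant writeup; the other prices are my own assumptions for illustration:

```python
# Back-of-envelope reach comparison for a $28,000 grant.
# Only the $43/copy HPMOR figure comes from the grant writeup;
# the other prices are assumed for illustration.
GRANT_USD = 28_000

cost_per_copy = {
    "HPMOR (printed set)": 43.00,       # per the grant writeup
    "Doing Good Better": 12.00,         # assumed paperback price
    "The Life You Can Save": 10.00,     # assumed paperback price
    "Link to Singer's TED Talk": 0.00,  # free to distribute
}

for title, price in cost_per_copy.items():
    reach = GRANT_USD / price if price > 0 else float("inf")
    print(f"{title}: ${price:.2f}/copy -> {reach:,.0f} recipients")
```

At these assumed prices, the grant reaches roughly 650 recipients with HPMOR versus more than 2,000 with either of the cheaper books.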
If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well-developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.
I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.
I have no idea who Mikhail Yagudin is, so I have no reason to suspect anything untoward, but the fact that you do not know him or his team compounds this grant’s problems, as you are aware.
I understand that the EA Funds are thought of as vehicles to fund higher risk and more uncertain causes. In the words of James Snowden and Elie Hassenfeld, “some donors give to this fund because they want to signal support for GiveWell making grants which are more difficult to justify and rely on more subjective judgment calls, but have the potential for greater impact than our top charities.” They were referring to GiveWell and the Global Health and Development Fund, but I think you would agree that this appetite for riskier donations applies to the other funds, including this Long Term Future Fund.
However, higher risk and uncertainty do not mean no evidentiary standards at all. In fact, uncertain grants such as this one should be accompanied by an abundance of strong intuitive reasoning if there is no empirical evidence to draw from. The reasoning outlined in the forum post does not meet that standard, in my view, for the reasons I gave in the prior paragraphs.
More broadly, I think this grant would hurt the EA community. Returning to the quote I began with, “Effective altruism is about answering one simple question: how can we use our resources to help others the most? Rather than just doing what feels right, we use evidence and careful analysis to find the very best causes to work on.” If I were a newcomer to the EA community and I saw this grant and the associated rationale, I would be utterly disenchanted with the entire movement. I would rightly doubt that this is among the most effective ways to spend $28,000 to improve the long-term future and notice the absence of “evidence and careful analysis”. If effective altruism does not demand greater rigor than other charities, then there is no reason for a newcomer to join the effective altruism movement.
So what should be done?
This grant should be directed elsewhere. EA Russia can find other funding to meet its oral promise, a promise that should not have been made without funding already secured.
EA Funds cannot be both a vehicle for riskier donations and the go-to recommendation for effective donations, as stated in the Introduction to Effective Altruism. This flies in the face of the transparency a newcomer would expect when donating. This is not the fault of this grant, but the grant is emblematic of this broader problem. I also want to reiterate that I think this grant still does not meet the evidentiary standard, even when EA Funds is viewed as a vehicle for riskier donations.
In this comment I want to address the following paragraph (#3).
I also want to point out that the fact that EA Russia has made oral agreements to give copies of the book before securing funding is deeply unsettling, if I understand the situation correctly. Why are promises being made in advance of having funding secured? This is not how a well-run organization or movement operates. If EA Russia did have funding to buy the books and this grant is displacing that funding, then what will EA Russia spend the original $28,000 on? This information is necessary to evaluate the effectiveness of this grant and should not be absent.
I think that it is a miscommunication on my side.
EA Russia has the oral agreements with [the organizers of math olympiads]...
We contacted organizers of math olympiads and asked them whether they would like to have HPMoRs as a prize (conditioned on us finding a sponsor). We didn’t promise anything to them, and they do not expect anything from us. Also, I would like to say that we did not approach them as EAs (as I am mindful of the reputational risks).
In this comment I want to address the following paragraph (related to #2).
If the goal is to encourage Math Olympiad winners to join the Effective Altruism community, why are they being given a book that has little explicitly to do with Effective Altruism? The Life You Can Save, Doing Good Better, and 80,000 Hours are three books much more relevant to Effective Altruism than Harry Potter and the Methods of Rationality. Furthermore, they are much cheaper than the $43 per copy of HPMOR. Even if one argues that HPMOR is more effective at encouraging Effective Altruism (a claim I doubt, and one substantiated nowhere), one also has to go further and provide evidence that the difference in cost between each copy of HPMOR and any of the other books I mentioned is justified. It is quite possible that sending the Math Olympiad winners a link to Peter Singer’s TED Talk, “The why and how of effective altruism”, is more effective than HPMOR in encouraging effective altruism. It is also free!
a. While I agree that the books you’ve mentioned are more directly related to EA than HPMoR, I think it would not be possible to give them as a prize. I think the fact that the organizers whom we contacted had read HPMoR significantly contributed to the possibility of giving anything at all.
b. I share your concern about HPMoR not being EA enough. We hope to mitigate this via a leaflet and via SPARC/ESPR.
I think this comment suggests there’s a wide inferential gap here. Let me see if I can help bridge it a little.
If the goal is to teach Math Olympiad winners important reasoning skills, then I question this goal. They just won the Math Olympiad. If any group of people already had well-developed logic and reasoning skills, it would be them. I don’t doubt that they already have a strong grasp of Bayes’ rule.
I feel fairly strongly that this goal is still important. I think that the most valuable resource that the EA/rationality/LTF community has is the ability to think clearly about important questions. Nick Bostrom advises politicians, tech billionaires, and the founders of the leading AI companies, and it’s not because he has the reasoning skills of a typical math olympiad winner. There are many levels of skill, and Nick Bostrom’s is much higher[1].
It seems to me that these higher level skills are not easily taught, even to the brightest minds. Notice how society’s massive increase in the number of scientists has failed to produce anything like linearly more deep insights. I have seen this for myself at Oxford University, where many of my fellow students could compute very effectively but could not then go on to use that math in a practical application, or even understand precisely what it was they’d done. The author, Eliezer Yudkowsky, is a renowned explainer of scientific reasoning, and HPMOR is one of his best works for this. See the OP for more models of what HPMOR does especially right here.
In general I think someone’s ability to think clearly, in spite of the incentives around them, is one of the main skills required for improving the world, much more so than whether they have a community affiliation with EA [2]. I don’t think that any of the EA materials you mention helps people gain this skill. But I think for some people, HPMOR does.
I’m focusing here on the claim that the intent of this grant is unfounded. To help communicate my perspective here, when I look over the grants this feels to me like one of the ‘safest bets’. I am interested to know whether this perspective makes the grant’s intent feel more reasonable to anyone reading who initially felt pretty blindsided by it.
---
[1] I am not sure exactly how widespread this knowledge is. Let me just say that it’s not Bostrom’s political skills that got him where he is. When the future head of IARPA decided to work at FHI, Bostrom’s main publication was a book on anthropics. I think Bostrom did excellent work on important problems, and this is the primary thing that has drawn people to work with and listen to him.
[2] Although I think being in these circles changes your incentives, which is another way to get someone to do useful work. Though again I think the first part is more important to get people to do the useful work you’ve not already figured out how to incentivise—I don’t think we’ve figured it all out yet.
Thanks for your long critique! I will try to respond to as much of it as I can.
As I see it, there are four separate claims in your comment, each of which warrants a separate response:
1. The Long-Term Future Fund should base all of its giving on a high standard of externally transparent evidence
2. Receiving HPMoRs is unlikely to cause the math olympiad participants to start working on the long-term future, or engage with the existing EA community
3. EA Russia has made an oral promise of delivering HPMoRs without having secured external funding first
4. If the Long-Term Future Fund is making grants that are this risky, they should not be advertised as the go-to vehicle for donations
I will start responding to some of them now, but please let me know if the above summary of your claims seems wrong.
I don’t think that 2) really captures the objection the way I read it. It seems that, on the margin, there are much more cost-effective ways of engaging math olympiad participants, and that the content distributed could be much more directly EA/AI related at lower cost than distributing 2,000 pages of hard-copy HPMoR.
I don’t think anyone should be trying to persuade IMO participants to join the EA community, and I also don’t think giving them “much more directly EA content” is a good idea.
I would prefer Math Olympiad winners to think about the long term, to think better, and to think independently, rather than to “join the EA community”. HPMoR seems OK because it is not a book trying to convince you to join a community, but mostly a book about how to think, and a good read.
(If the readers eventually become EAs after reasoning independently, that’s likely good; if they, for example, come to the conclusion that there are major flaws in EA and it’s better to engage with the movement critically, that’s also good.)
I do think there is value in showing them that there exists a community that cares a lot about the long-term future, and some value in them collaborating with that community instead of going off and doing their own thing, but the first priority should be to help them think better, and to think about the long term at all.
I think none of the other proposed books achieve this very well.
Hello, first of all, thank you for engaging with my critique. I have some clarifications for your summary of my claims.
Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.
I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don’t make the claim that it won’t be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.
I’m not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?
Not necessarily that risky funds shouldn’t be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.
Sorry for the delay; others seem to have given a lot of good responses in the meantime, but here is my current summary of those concerns:
1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.
By word count, the HPMOR writeup is (I think) among the three longest writeups I produced for this round of grant proposals. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that allows you to verify that.
The core arguments that I provided in the writeup above seem sufficiently strong to me. They will not necessarily convince a completely independent observer, but for someone with context about community building and general work done on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.
I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, while not constraining themselves to only account for evidence that is easily communicable to other people. They then should also invest significant resources into communicating whatever can be communicated about their reasons and intuitions and actively seek out counterarguments and additional evidence that would change their mind.
2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don’t make the claim that it won’t be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.
This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:
I don’t think the aim of this grant should be “to recruit IMO and EGMO winners into the EA community”. I think membership in the EA community is of relatively minor importance compared to helping them get traction in thinking about the long-term future, teaching them basic thinking tools, and giving them opportunities to talk to others who have similar interests.
I think from an integrity perspective it would be actively bad to try to persuade young high-school students to join the community. HPMoR is a good book to give because some of the IMO and EGMO organizers have read the book and found it interesting on its own merits, and would be glad to receive it as a gift. I don’t think any of the other books you proposed would be received in the same way; I think they are much more likely to be received as advocacy material that is trying to recruit them into some kind of in-group.
Jan’s comment summarized the concerns I have here reasonably well.
As Misha said, this grant is possible because the IMO and EGMO organizers are excited about giving out HPMoRs as prizes. It is not logistically feasible to give out other material that the organizers are not excited about (and I would be much less excited about a grant that did not go through the organizers of these events).
As Ben Pace said, I think HPMoR teaches skills that math olympiad winners lack. I am confident of this both because I have participated in SPARC events that tried to teach those skills to math olympiad winners, and because impact via intellectual progress is very heavy-tailed: the very best people tend to have a massively outsized impact with their contributions. Improving the reasoning and judgement ability of some of the best people on the planet strikes me as quite valuable.
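To illustrate what I mean by “heavy-tailed”, here is a toy simulation; the Pareto shape parameter is an assumption chosen for illustration, not an estimate of any actual impact distribution:

```python
import random

# Toy model: if individual impact followed a Pareto distribution
# (shape parameter alpha assumed, not estimated), the top 1% of
# people would account for a large share of the total impact.
random.seed(0)
alpha = 1.16  # assumed; roughly the classic "80/20" shape
impacts = sorted((random.paretovariate(alpha) for _ in range(100_000)), reverse=True)
top_share = sum(impacts[:1_000]) / sum(impacts)
print(f"Share of total impact from the top 1%: {top_share:.0%}")
```

Under this assumption, improving the reasoning of a handful of people at the very top of the distribution can matter more than modest improvements for everyone else.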
3. I’m not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?
Misha responded to this. There is no $28k that this grant is displacing; the counterfactual is likely that there simply wouldn’t be any books given out at IMO or EGMO. All the organizers did was ask whether they would be able to give out the prizes, conditional on EA Russia finding someone to sponsor them. I don’t see any problems with this.
4. Not necessarily that risky funds shouldn’t be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.
My guess is that most of our donors would prefer us to feel comfortable making risky grants, but I am not confident of this. Our grant page does list the following under the section “Why might you choose to not donate to this fund?”:
First, donors who prefer to support established organizations. The fund managers have a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.
This is the first and top reason we list for why someone might not want to donate to this fund. This doesn’t necessarily directly translate into risky grants, but I think it does communicate that we are trying to identify early-stage opportunities that are not necessarily associated with proven interventions and strong track records.
From a communication perspective, one of the top reasons why I invested so much time into this grant writeup is to be transparent about what kinds of interventions we are likely to fund, and to help donors decide whether they want to donate to this fund. At the least, I will continue advocating for early-stage and potentially weird-looking grants for as long as I am part of the LTF board, and donors should know that. If you have any specific proposed wording, I am also open to suggesting to the rest of the fund team that we update our fund page accordingly.
Thanks for the response. I don’t have the time to draft a reply this week but I’ll get back to you next week.