I’d like to challenge the downside estimate re: HPMoR distribution funding.
“So I felt comfortable recommending this grant, especially given its relatively limited downside.”
I think that funding this project comes with potentially significant PR and reputational risk, especially considering the goals for the fund. It seems like it might be a much better fit for the Meta fund, rather than for the fund that aims to: “support organizations that work on improving long-term outcomes for humanity”.
Could you say a bit more about what kind of PR and reputational risks you are imagining? Given that the grant is done in collaboration with the IMO and EGMO organizers, who seem to have read the book themselves and seem to be excited about giving it out as a prize, I don’t think I understand what kind of reputational risks you are worried about.
I am not OP but as someone who also has (minor) concerns under this heading:
- Some people judge HPMoR to be of little artistic merit / low aesthetic quality.
- Some people find the subcultural affiliations of HPMoR off-putting (fanfiction in general, copious references to other arguably low-status fandoms).
If the recipients have negative impressions of HPMoR for reasons like the above, that could result in (unnecessarily) negative impressions of rationality/EA.
Clearly, there are also many people who like HPMoR and don’t share the above concerns. The key question is probably what fraction of recipients will have positive, neutral, and negative reactions.
Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it’s unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn’t, as I have argued below).
The outcome might be that some people start disliking HPMoR, but that doesn’t seem very bad and carries relatively little downside. Maybe some people will start disliking CFAR, though I think CFAR benefits a lot more on net from having additional people who are highly enthusiastic about it than it suffers from people who kind of dislike it.
I have some vague feeling that there might be some more weird downstream effects of this, but I don’t think I have any concrete models of how they might happen, and would be interested in hearing more of people’s concerns.
(Responding to the second point about which fund is a better fit for this, will respond to the first point separately)
I am broadly confused about how to deal with the “which fund is a better fit?” question. Since it’s hard to influence the long-term future, I expect a lot of good interventions to go via the path of first introducing people to the community, building institutions that can improve our decision-making, and generally building positive feedback loops and resources that we can deploy as soon as concrete opportunities show up.
My current guess is that we should check in with the Meta fund about their grants to make sure we don’t make overlapping grants and that we communicate any concerns, but that as soon as there is an application we think is worthwhile from the perspective of the long-term future that the Meta fund is not covering, we should feel comfortable funding it, independently of whether it looks a bit like EA-Meta. But I am open to changing my mind on this.
Could this be straightforwardly simplified by bracketing out far future meta work as within the remit of the Long Term Future Fund, and all other meta work (e.g. animal welfare institution-building, global development institution-building) as within the remit of the Meta Fund?
Not sure if that would cleave reality at the joints, but seems like it might.
I actually think that as long as you communicate potential downside risks, there is a lot of value in having independent granting bodies look over the same pool of applications.
I think a single granting body is likely to miss a large number of good opportunities, and general intuitions around hits-based giving make me think that encouraging independence here is better than assigning every grant to exactly one domain (this does rely on those granting bodies being able to communicate clearly about downside risk, which I think we can achieve).
Is this different from having more people on a single granting body?
Possibly with more people on a single granting body, everyone talks to each other more and so can all get stuck thinking the same thing, whereas they would have come up with more or different considerations had they been separate. But this suggests that granting bodies would benefit from splitting into halves, going over grants individually, and then merging at the end. Would you endorse that suggestion?
I don’t think you want to go below three people for a granting body, to make sure you can catch all the potential downsides of a grant. My guess is that if you have six or more people, it would be better to split them into two independent grant teams.
Yes, this is a great idea to help reduce bias in grantmaking.
Not the book giveaway itself, but posting grant information like this can be very bad PR.
I think I agree, but why do you think so?
I’ve seen it happen. A grant like this should either not be made, or should be made in private, regardless of how well people behave themselves on this forum.