Yep, I saw that. I didn’t actually intend to criticize your use of the quiz, sorry if it came across that way. I just gave it a try and figured I would contribute some data.
(This doesn’t mean I agree with how 80k communicates information. I haven’t kept up at all with 80k’s writing, so I don’t have any strong opinions either way here)
I got them on basically every setting that remotely applied to me.
I think sadly pretty low, based on my current model of everyone’s time constraints, and also of CEA’s logistical constraints.
(This is just my personal perspective and does not aim to reflect the opinions of anyone else on the LTF-Fund)
I am planning to send more feedback on this to the EA Hotel people.
I have actually broadly come around to the EA Hotel being a good idea. At the time we made the grant decision there was a lot less evidence and there were fewer writeups around, and it was those writeups by a variety of people that convinced me it is likely a good idea, with some caveats.
Yeah, that’s what I intended to say. “In the world where I come to the above opinion, I expect my crux will have been that whatever made CFAR historically work, is still working”
Will update to say “help facilitate”. Thanks for the correction!
He sure was on weird timezones during our meetings, so I think he might be both? (as in, flying between the two places)
I think that people should feel comfortable sharing their system-1 expressions, in a way that does not immediately imply judgement.
I am thinking of stuff like the Nonviolent Communication patterns, where you structure your observation in the following steps:
1. List a set of objective observations
2. Report your experience upon making those observations
3. Share your personal interpretations of those experiences and what they imply about your model of the world
4. Make any requests that follow from those models
I think it’s fine to stop part-way through this process, but that it’s generally a good idea to not skip any steps. So I think it’s fine to just list observations, and it’s fine to just list observations and then report how you feel about those things, as long as you clearly indicate that this is your experience and doesn’t necessarily involve judgement. But it’s a bad idea to immediately skip to the request/judgement step.
I will get back to you, but it will probably be a few days. It seems fairer to first send feedback to the people I said I would send private feedback to, and then come back to the public feedback requests.
I don’t get compensated, though I also don’t think compensation would make much of a difference for me or anyone else on the fund (except maybe Alex).
Everyone on the fund is basically dedicating all of their resources towards EA stuff, and is generally giving up most of their salary potential to work in EA. I don’t think it would make much sense for us to get more money, given that we are already de-facto donating everything above a certain threshold (either literally, in the case of the two Matts, or indirectly, by taking a pay cut and working in EA).
I think if people give more money to the fund because they come to trust the decisions of the fund more, then that seems like it would incentivize more things like this. Also if people bring up strong arguments against any of the reasoning I explained above, then that is a great win, since I care a lot about our fund distributions getting better.
I think there is something going on in this comment that I wouldn’t put in the category of “outside view”. Instead I would put it in the category of “perceiving something as intuitively weird, and reacting to it”.
I think weirdness is overall a pretty bad predictor of impact, in both the positive and negative direction. I think it’s a good emotion to pay attention to, because often you can learn valuable things from it, but it only sometimes gives rise to real arguments for or against an idea.
It is also very susceptible to framing effects. The comment above says “$39,000 to make unsuccessful youtube videos”. That sure sounds naive and weird, but the whole argument relies on the word “unsuccessful” which is a pure framing device and fully unsubstantiated.
And, even though I think weirdness is only a mediocre predictor of impact, I am quite confident that the degree to which a grant or a grantee is perceived as intuitively weird by broad societal standards is still by far the biggest predictor of whether your project can receive a grant from any major EA granting body. (I don’t think this is necessarily the fault of the granting bodies; it is instead the result of a variety of complicated social incentives that force their hand most of the time.)
I think this has an incredibly negative effect on the ability of the Effective Altruism community to make progress on any of the big problems we care about, and I really don’t think we want to push further in that direction.
I think you want to pay attention to whether you perceive something as weird, but I don’t think that feeling should be among your top considerations when evaluating an idea or project, and I think right now it is usually the single biggest consideration in most discourse.
After chatting with you about this via PMs, I think you aren’t necessarily making that mistake, since I think you do emphasize that there are many arguments that could convince you that something weird is still a good idea.
I think it is particularly important for “something being perceived as weird is definitely not sufficient reason to dismiss it as an effective intervention” to be common knowledge and part of public discourse. The same goes for “if someone is doing something that looks weird to me, without me having thought much about it or asked them much about their reasons for doing things, then that isn’t much evidence that what they are doing is a bad idea”.
The primary thing I expect him to do with this grant is to work together with John Salvatier on doing research on skill transfer between experts (which I am partially excited about because that’s the kind of thing that I see a lot of world-scale model building and associated grant-making being bottlenecked on).
However, as I mentioned in the review, if he finds that he can’t contribute to that as effectively as he thought, I want him to feel comfortable pursuing other research avenues. I don’t currently have a short-list of what those would be, but would probably just talk with him about what research directions I would be excited about, if he decides to not collaborate with John. One of the research projects he suggested was related to studying historical social movements and some broader issues around societal coordination mechanisms that seemed decent.
I primarily know about the work he has so far produced with John Salvatier, and also know that he has demonstrated general competence in a variety of other projects, including making money managing a small independent hedge fund, running a research project for the Democracy Defense Fund, doing some research at Brown University, and participating in some forecasting tournaments and scoring well.
Hmm, I guess it depends a bit on how you view this.
If you model this in terms of “total financial resources going to EA-aligned people”, then the correct calculation is ($150k * 1.5) plus whatever CEA loses in taxes for 1.5 employees.
If you want to model it as “money controlled directly by EA institutions” then it’s closer to your number.
I think the first model makes more sense, which does still suggest a lower number than what I gave above, so I will update.
Ah, yes. The second one. Will update.
Hmm, so my model is that the books are given out without significant EA affiliation, together with a pamphlet for SPARC and ESPR. I also know that HPMoR is already relatively widely known among math olympiad participants. Those together suggest that it’s unlikely this would cause much reputational damage to the EA community, given that none of this contains an explicit reference to the EA community (and shouldn’t, as I have argued below).
The outcome might be that some people start disliking HPMoR, but that doesn’t seem super bad and carries relatively little downside. Maybe some people will start disliking CFAR, though I think CFAR benefits a lot more on net from having additional people who are highly enthusiastic about it than it suffers from people who kind-of dislike it.
I have some vague feeling that there might be some more weird downstream effects of this, but I don’t think I have any concrete models of how they might happen, and would be interested in hearing more of people’s concerns.
Could you say a bit more about what kind of PR and reputational risks you are imagining? Given that the grant is done in collaboration with the IMO and EGMO organizers, who seem to have read the book themselves and seem to be excited about giving it out as a prize, I don’t think I understand what kind of reputational risks you are worried about.
Here is my rough fermi:
My guess is that there is about one full-time person working on the logistics of EA Grants, together with about half of another person lost to overhead, communications, technology (the EA Funds platform), and management.
Since people’s competence is generally high, I estimated the counterfactual earnings of that person at around $150k, with an additional salary from CEA of $60k that is presumably taxed at around 30%, resulting in a total loss of money going to EA-aligned people of around ($150k + 0.3 * $60k) * 1.5 = $252k per year [Edit: Updated wrong calculation]. EA Funds has made fewer than 100 grants a year, so a total of about $2k–$3k per grant in overhead seems reasonable.
To be clear, this is average overhead. Presumably marginal overhead is smaller than average overhead, though I am not sure by how much. I randomly guessed it would be about 50%, resulting in something around $1k to $2k overhead.
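The fermi above can be sketched in a few lines. The tax rate, grant count, and 50% marginal-overhead guess are taken from the text; all figures are rough estimates, not data:

```python
# Fermi estimate of EA Grants overhead per grant, using the numbers above.
counterfactual_salary = 150_000   # estimated counterfactual earnings
cea_salary = 60_000               # salary paid by CEA
tax_rate = 0.30                   # assumed tax rate on the CEA salary
fte = 1.5                         # full-time-equivalents absorbed by the process

# Total yearly loss of money going to EA-aligned people.
total_overhead = (counterfactual_salary + tax_rate * cea_salary) * fte
assert total_overhead == 252_000  # $252k per year

grants_per_year = 100             # "fewer than 100 grants a year"
average_overhead = total_overhead / grants_per_year       # ≈ $2.5k per grant

marginal_fraction = 0.5           # rough guess: marginal ≈ 50% of average
marginal_overhead = average_overhead * marginal_fraction  # ≈ $1.3k per grant
```

This lands in the stated $1k–$2k range for marginal overhead once rounded.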
Sorry for the delay, others seem to have given a lot of good responses in the meantime, but here is my current summary of those concerns:
1. Ideally, yes. If there is a lack of externally transparent evidence, there should be strong reasoning in favor of the grant.
By word count, the HPMoR writeup is (I think) among the three longest writeups that I produced for this round of grant proposals. I think my reasoning is sufficiently strong, though it is obviously difficult for me to comprehensively explain all of my background models and reasoning in a way that allows you to verify that.
The core arguments that I provided in the writeup above seem sufficiently strong to me. They may not convince a completely independent observer, but for someone with context about community building and general work done on the long-term future, I expect them to successfully communicate the actual reasons why I think the grant is a good idea.
I generally think grantmakers should give grants to whatever interventions they think are likely to be most effective, while not constraining themselves to only account for evidence that is easily communicable to other people. They then should also invest significant resources into communicating whatever can be communicated about their reasons and intuitions and actively seek out counterarguments and additional evidence that would change their mind.
2. I think that there is no evidence that using $28k to purchase copies of HPMOR is the most cost-effective way to encourage Math Olympiad participants to work on the long-term future or engage with the existing community. I don’t make the claim that it won’t be effective at all. Simply that there is little reason to believe it will be more effective, either in an absolute sense or in a cost-effectiveness sense, than other resources.
This one has mostly been answered by other people in the thread, but here is my rough summary of my thoughts on this objection:
I don’t think the aim of this grant should be “to recruit IMO and EGMO winners into the EA community”. I think membership in the EA community is of relatively minor importance compared to helping them get traction in thinking about the long-term future, teaching them basic thinking tools, and giving them opportunities to talk to others who have similar interests.
I think from an integrity perspective it would be actively bad to try to persuade young high-school students to join the community. HPMoR is a good book to give because some of the IMO and EGMO organizers have read the book, found it interesting on its own merit, and would be glad to receive it as a gift. I don’t think any of the other books you proposed would be received in the same way, and I think they are much more likely to be received as advocacy material that is trying to recruit them into some kind of in-group.
Jan’s comment summarized the concerns I have here reasonably well.
As Misha said, this grant is possible because the IMO and EGMO organizers are excited about giving out HPMoRs as prizes. It is not logistically feasible to give out other material that the organizers are not excited about (and I would be much less excited about a grant that did not go through the organizers of these events).
As Ben Pace said, I think HPMoR teaches skills that math olympiad winners lack. I am confident of this both because I have participated in SPARC events that tried to teach those skills to math olympiad winners, and because impact via intellectual progress is very heavy-tailed, with the absolute best people tending to have a massively outsized impact with their contributions. Improving the reasoning and judgement ability of some of the best people on the planet strikes me as quite valuable.
3. I’m not sure about this, but this was the impression the forum post gave me. If this is not the case, then, as I said, this grant displaces some other $28k in funding. What will that other $28k go to?
Misha responded to this. There is no $28k that this grant is displacing; the counterfactual is likely that there simply wouldn’t be any books given out at IMO or EGMO. All the organizers did was ask whether they would be able to give out prizes, conditional on finding someone to sponsor them. I don’t see any problems with this.
4. Not necessarily that risky funds shouldn’t be recommended as go-to, although that would be one way of resolving the issue. My main problem is that it is not abundantly clear that the Funds often make risky grants, so there is a lack of transparency for an EA newcomer. And while this particularly applies to the Long Term fund, given it is harder to have evidence concerning the Long Term, it does apply to all the other funds.
My guess is that most of our donors would prefer us to feel comfortable making risky grants, but I am not confident of this. Our grant page does list the following under the heading “Why might you choose to not donate to this fund?”:
First, donors who prefer to support established organizations. The fund managers have a track record of funding newer organizations and this trend is likely to continue, provided that promising opportunities continue to exist.
This is the first and top reason we list why someone might not want to donate to this fund. This doesn’t necessarily directly translate into risky grants, but I think does communicate that we are trying to identify early-stage opportunities that are not necessarily associated with proven interventions and strong track-records.
From a communication perspective, one of the top reasons why I invested so much time into this grant writeup is to be transparent about the kinds of interventions we are likely to fund, and to help donors decide whether they want to donate to this fund. At least, I will continue advocating for early-stage and potentially weird-looking grants as long as I am part of the LTF-board, and donors should know about that. If you have any specific proposed wording, I am also open to suggesting to the rest of the fund-team that we should update our fund-page with that wording.
1. “Why give CFAR such a large grant at all, given that you seem to have a lot of concerns about their future”
I am overall still quite positive on CFAR. I have significant concerns, but the total impact CFAR had over the course of its existence strikes me as very large and easily worth the resources it has taken up so far.
I don’t think it would be the correct choice for CFAR to take irreversible action right now just because they (correctly) decided not to run a fall fundraiser, and I still assign significant probability to CFAR actually being on the right track to continue having a large impact. My model here is mostly that whatever allowed CFAR to have a historical impact has not broken, and so will continue producing value of the same type.
2. “Why not give CFAR a grant that is conditional on some kind of change in the organization?”
I considered this for quite a while, but ultimately decided against it. I think grantmakers should generally be very hesitant to make earmarked or conditional grants to organizations without knowing in close detail how the organization operates. Some things that seem easy to change from the outside often turn out to be really hard to change for good reasons. Conditions also have the potential to create a kind of adversarial relationship in which the organization is incentivized to do the minimum amount of effort necessary to meet the conditions of the grant, which I think tends to make transparency a lot harder.
Overall, I much more strongly prefer to recommend unconditional grants with concrete suggestions for what changes would cause future unconditional grants to be made to the organization, while communicating clearly what kind of long-term performance metrics or considerations would cause me to change my mind.
I expect to communicate extensively with CFAR over the coming weeks, talk to most of its staff members, generally get a better sense of how CFAR operates and think about the big-picture effects that CFAR has on the long-term future and global catastrophic risk. I think I am likely to then either:
- make recommendations for a set of changes with conditional funding,
- decide that CFAR does not require further funding from the LTF, or
- be convinced that CFAR’s current plans make sense and that they should have sufficient resources to execute those plans.
Here is a rough summary of the process; it’s hard to explain spreadsheets in words, so this might end up sounding a bit confusing:
- We added all the applications to a big spreadsheet, with a column for each fund member and advisor (Nick Beckstead and Jonas Vollmer), in which they were encouraged to assign a number from −5 to +5 to each application.
- There was then a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, or voting positively or negatively if they thought the grant was a good or a bad idea.
- We then had a number of video-chat meetings in which we went through all the grants that at least one person thought were a good idea, and had pretty extensive discussions about them. During those meetings we also agreed on next actions for follow-ups, scheduling meetings with some of the potential grantees, reaching out to references, etc., the results of which we would then discuss at the next all-hands meeting.
- Interspersed with the all-hands meetings, I also had a lot of 1-on-1 meetings (with both other fund members and grantees) in which I worked in detail through some of the grants with the other person, and hashed out deeper disagreements we had about some of the grants (like whether certain causes and approaches are likely to work at all, how much we should make grants to individuals, etc.).
- As a result of these meetings there was significant updating of the votes everyone had on each grant, with almost every grant we made having at least two relatively strong supporters and a total score above 3 in aggregate votes.
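As a toy illustration of this voting scheme: the −5 to +5 range and the “total score above 3 with at least two relatively strong supporters” criterion come from the description above, but treating a vote of +3 or higher as “strong” is my own assumption, and the real process involved discussion and judgment this omits.

```python
# Toy sketch of the first voting scheme. Votes range from -5 to +5;
# None means the voter abstained due to a conflict of interest.
def approved(votes, strong_vote=3, total_threshold=3, min_strong=2):
    """A grant passes with an aggregate score above 3 and at least two
    relatively strong supporters (the +3 'strong' cutoff is an assumption)."""
    cast = [v for v in votes if v is not None]
    strong_supporters = sum(1 for v in cast if v >= strong_vote)
    return sum(cast) > total_threshold and strong_supporters >= min_strong

assert approved([4, 3, -1, None, 0]) is True   # total 6, two strong supporters
assert approved([5, 1, 1, 1, 1]) is False      # high total, only one strong supporter
```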
However, some fund members weren’t super happy with this process, and I also think it encouraged too much consensus-based decision-making: many of the grants with the highest vote scores were ones that everyone thought were vaguely a good idea, but that nobody was strongly excited about.
We then revamped our process towards the latter half of the one-month review period and experimented with a new spreadsheet that allowed each individual fund member to suggest grant allocations for 15% and 45% of our total available budget. In the absence of a veto from another fund member, grants in the 15% category would be made mostly at the discretion of the individual fund member, and we would add up grant allocations from the 45% budget until we ran out of our allocated budget.
Both processes actually resulted in roughly the same grant allocation, with one additional grant being made under the second allocation method and one grant not making the cut. We ended up going with the second allocation method.
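A toy sketch of the second allocation method, under the assumptions that a veto simply removes a suggestion and that suggestions are added in order until a tranche is exhausted; the grant names, amounts, and budget are made up for illustration.

```python
# Hypothetical numbers: a $1M budget split into a 15% discretionary tranche
# and a 45% pooled tranche, as described in the comment above.
total_budget = 1_000_000
discretionary_cap = 0.15 * total_budget   # per-member discretionary tranche
pooled_cap = 0.45 * total_budget          # pooled tranche

# (grant, amount, vetoed?) suggestions from one hypothetical fund member.
discretionary = [("grant_a", 60_000, False), ("grant_b", 120_000, True)]
pooled = [("grant_c", 200_000), ("grant_d", 300_000)]

funded = []
spent = 0
# Discretionary grants go through unless vetoed, up to the 15% cap.
for name, amount, vetoed in discretionary:
    if not vetoed and spent + amount <= discretionary_cap:
        funded.append(name)
        spent += amount

# Pooled suggestions are added in order until the 45% tranche runs out.
pooled_spent = 0
for name, amount in pooled:
    if pooled_spent + amount <= pooled_cap:
        funded.append(name)
        pooled_spent += amount
```

Here grant_b is dropped by a veto and grant_d doesn’t fit into the remaining pooled budget, so only grant_a and grant_c are funded.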