are planning another set in February 2019
Will this be an open round, and if so, where can I direct people with promising applications?
With respect to crypto / blockchain fundraising, EAF has considered launching something in that space, but we haven’t done so yet, and the idea is currently deprioritized (see https://smart-giving.org/ for a mock-up). Get in touch if you’d like to work on this!
Great post, upvoted!
Great update! I’m really happy about the new table of contents (and the automatic anchors that allow linking directly to specific sections).
Thanks! I see the point better now. While I don’t fully agree with everything, I think it could make sense to rename the fund if/once we have a good idea.
why should they donate to your fund rather than another EA fund if they don’t understand the basic goal you are aiming for with your fund?
We always point out that the fund is focused on reducing suffering in the long-term future.
Also, why should they donate to that other fund instead? E.g., the Long-Term Future Fund is also importantly motivated by “astronomical waste” type considerations which those donors don’t understand either, and might not agree with.
I can imagine that people donate to the EAF fund for social reasons (e.g. you happen to be well-connected to poker players) more than intellectual reasons (i.e. funders donate because they prioritize s-risk reduction).
Yeah. This will always be the case with many donors, regardless of which fund they donate to.
Of course, this is part of a larger coordination problem in which all kinds of non-intellectual reasons are driving donation decisions.
I wouldn’t call it a coordination problem in the game-theoretic sense, and in many cases I don’t think it’s even a problem: it’s important that donors aren’t deceived into supporting something that they wouldn’t want to support, but in the many cases where donors don’t have informed opinions (e.g., on population ethics), it’s fine if you fill in the details for them with a plausible view held by a significant part of the community.
Perhaps it should be a best practice for EA fundraisers to recommend that all funders go through a (to-be-created) donation decision tool that takes them through some of the relevant questions.
I think we’d be open to doing something like this.
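To make the idea concrete, here’s a minimal, hypothetical sketch of what such a donation decision tool could look like as a small command-line decision tree. The questions, answer options, and fund names below are illustrative placeholders I’ve made up, not an actual recommendation flow:

```python
# Hypothetical sketch of a donation decision tool: a tiny decision tree
# that walks a donor through a few prioritization questions and suggests
# a fund. All questions and fund names are illustrative placeholders.

DECISION_TREE = {
    "question": "Do you want to focus on improving the long-term future?",
    "yes": {
        "question": "Are you more worried about extinction risks or about "
                    "risks of astronomical suffering (s-risks)?",
        "extinction": {"recommendation": "Long-Term Future Fund"},
        "s-risks": {"recommendation": "EAF Fund"},
    },
    "no": {"recommendation": "Global Health and Development Fund"},
}

def run_tool(node):
    """Walk the tree interactively until a recommendation is reached."""
    while "recommendation" not in node:
        print(node["question"])
        options = [key for key in node if key != "question"]
        answer = input(f"({' / '.join(options)}) > ").strip().lower()
        if answer in node:
            node = node[answer]
        else:
            print(f"Please answer one of: {', '.join(options)}")
    print(f"Suggested starting point: {node['recommendation']}")

if __name__ == "__main__":
    run_tool(DECISION_TREE)
```

A real tool would of course need many more questions (population ethics, risk attitudes, timelines) and should link to explanations of each concept rather than forcing a binary answer.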
It seems plausible that people who earn on the very high end of the spectrum might not have filled in the survey due to time constraints (selection bias).
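As a quick illustration of why this kind of selection bias matters, here’s a hedged simulation sketch: the income distribution and the response model below are entirely made up, but they show how a lower response rate among high earners drags the surveyed mean below the true mean.

```python
# Toy simulation of survey selection bias: if response probability falls
# with income, the surveyed mean understates the true population mean.
# The log-normal income distribution and response probabilities are
# invented for illustration only.
import math
import random

random.seed(0)

# Skewed (log-normal) incomes for a hypothetical population.
population = [math.exp(random.gauss(11, 1)) for _ in range(100_000)]

def responds(income):
    # Assume very high earners respond far less often (e.g., time constraints).
    return random.random() < (0.9 if income < 200_000 else 0.2)

respondents = [income for income in population if responds(income)]

print(f"True mean income:     {sum(population) / len(population):,.0f}")
print(f"Surveyed mean income: {sum(respondents) / len(respondents):,.0f}")
```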
Kerry Vaughan talks about his experience with EA Ventures in this panel discussion: https://youtu.be/Y4YrmltF2I0?t=169
Thanks for the questions!
1. It’s clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF’s work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I’d guess it’s a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)
I’d say it’s just the philosophical focus, not the geographic location. In practice, this comes down to a particular focus on conflict involving AI systems. For more background, see Cause prioritization for downside-focused value systems. Our research agenda will hopefully help make this easier to understand as well.
2. Regarding your “fundraising” mistakes: Did you learn any lessons in the course of speaking with philanthropists that you’d be willing to share? Was there any systematic difference between conversations that were more vs. less successful?
If we could go back, we’d define the relationships more clearly from the beginning by outlining a roadmap with regular check-ins. We’d also focus less on pitching EA and more on explaining how they could use EA to solve their specific problems.
3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF’s work has been useful in making progress on core problems, or integrating into the overall X-risk research ecosystem?
(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful/influential than another in ways that won’t be clear for a long time. I don’t know if there’s any way to demonstrate research quality to non-technical people, and I wouldn’t be surprised if that problem was essentially impossible.)
In terms of publicly verifiable evidence, Max Daniel’s talk on s-risks was received positively on LessWrong, and GPI cited several of our publications in their research agenda. In-person feedback from researchers at other x-risk organizations was usually positive as well.
In terms of critical feedback, others pointed out that the presentation of our research is often too long and broad, and might trigger absurdity heuristics. We’ve been working to improve our research along these lines, but it’ll take some time for this to become publicly visible.
Thanks so much for publishing the videos and transcripts!
Some quick suggestions:
Add the official title of the talk. Instead of “Scott Garrabrant’s Talk”, write “Scott Garrabrant: Goodhart’s Law”. This allows readers to determine more quickly whether to read a transcript.
Add a brief speaker bio (potentially including links to their and their employers’ websites) at the top of the transcripts.
Adopt a consistent naming scheme for YouTube talk titles. For most talks, this should be something like “Speaker: Topic (Name of conference)”.
I’d find it very useful if these videos were published more quickly, e.g., within 3 months of a conference.
Thanks for the feedback! I think part of the challenge is that the name also needs to be fairly short and easy to remember. “Long-Term Future Fund” is already a bit long and hard to remember (people often seem to get it wrong), so I’m nervous about making it even longer. We seriously considered “S-risk Fund” but ultimately decided against it because that name seems harder to fundraise for among people who are less familiar with advanced EA concepts (e.g., poker pros interested in improving the long-term future). Also, most people who understand the idea of s-risks will also know that EAF works on them.
I’d be curious to hear whether the above points were convincing, or whether you’d still perceive it as suboptimal.
1) The Long-Term Future Fund seems most important to coordinate with. Since I’m both a fund manager at the EAF Fund and an advisor to the Long-Term Future Fund, I hope to facilitate such coordination.
2) Individual EA donors, poker pros (through our current matching challenge), and maybe other large donors.
3) Yes, that sounds correct. We’re particularly excited to support researchers who work on specific s-risk-related questions within those areas, but I expect that the research we fund could also positively influence AI in other ways (e.g. much of the decision theory work might make positive-sum trade more likely and could thereby increase the chance of realizing the best possible outcomes). We might also fund established organizations like MIRI if they have room for more funding.
I think Stefan’s (and my) idea is to do the reference checks slightly earlier, e.g. at the point when deciding whether to offer a trial, but not in the first rounds of the application process. At that point, the expected benefit is almost as high as it is at the very end of the process, and thus probably worth the cost.
This avoids having to ask for references very early in the application process, and has the additional benefit of potentially improving the decision of whether to invite someone to a trial, thereby saving applicants and employers a lot of time and energy (in expectation).
I think this post sounds great and I don’t see any issues with the content.
I still wanted to flag briefly that in personal conversation, a few members of the community have voiced some worries about this project and the team to me, so I’d recommend investigating these further before investing large amounts. These worries might be partly or fully out of date, and may not be substantiated upon further investigation, but I thought given the nature of the project and what’s at stake for other organizations/individuals, it may be worth mentioning.
(Personal opinion, not my employer’s.)
What would be the next steps funders could take if they would like to support this type of work?
There is this list of essential EA resources: https://www.effectivealtruism.org/resources/
Thanks, makes sense! Would be great to see such data in the future, though I agree it seems hard to track.
Right, when I wrote “career plan changes” I mostly meant that they end up studying a subject different from their previous best guess (if they had one) at least partly for EA reasons. (Or at a different university, e.g. a top school.)
Have you tried / considered tracking career plan changes, and if so, do you have any tentative results you could share? (If not, what’s your reasoning for not focusing on this more?)