1. It’s clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF’s work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I’d guess it’s a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)
I’d say it’s just the philosophical focus, not the geographic location. In practice, this comes down to a particular focus on conflict involving AI systems. For more background, see “Cause prioritization for downside-focused value systems.” Our research agenda will hopefully help make this easier to understand as well.
2. Regarding your “fundraising” mistakes: Did you learn any lessons in the course of speaking with philanthropists that you’d be willing to share? Was there any systematic difference between conversations that were more vs. less successful?
If we could go back, we’d define the relationships more clearly from the beginning by outlining a roadmap with regular check-ins. We’d also focus less on pitching EA and more on explaining how they could use EA to solve their specific problems.
3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF’s work has been useful in making progress on core problems, or how it has integrated into the overall x-risk research ecosystem?
(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful or influential than another in ways that won’t be clear for a long time. I don’t know whether there’s any way to demonstrate research quality to non-technical people, and I wouldn’t be surprised if that problem were essentially intractable.)
In terms of publicly verifiable evidence, Max Daniel’s talk on s-risks was received positively on LessWrong, and GPI cited several of our publications in their research agenda. In-person feedback from researchers at other x-risk organizations has also generally been positive.
On the critical side, others have pointed out that our research is often presented at too great a length and too broadly, and that it can trigger absurdity heuristics. We’ve been working to improve the presentation along these lines, but it will take some time for these changes to become publicly visible.
Thanks for the questions!