Thanks for a great writeup, Jonas! I really liked the clear layout of the post and the link to provide anonymous feedback.
Questions I had after reading the post:
1. It's clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF's work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I'd guess it's a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)
2. Regarding your "fundraising" mistakes: Did you learn any lessons in the course of speaking with philanthropists that you'd be willing to share? Was there any systematic difference between conversations that were more vs. less successful?
3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF's work has been useful in making progress on core problems, or integrating into the overall x-risk research ecosystem?
(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful/influential than another in ways that won't be clear for a long time. I don't know if there's any way to demonstrate research quality to non-technical people, and I wouldn't be surprised if that problem were essentially unsolvable.)
Thanks for the questions!

1. It's clear that EAF does some unique, hard-to-replace work (REG, the Zurich initiative). However, when it comes to EAF's work around research (the planned agenda, the support for researchers), what sets it apart from other research organizations with a focus on the long-term future? What does EAF do in this area that no one else does? (I'd guess it's a combination of geographic location and philosophical focus, but I have a hard time clearly distinguishing the differing priorities and practices of large research orgs.)
I'd say it's just the philosophical focus, not the geographic location. In practice, this comes down to a particular focus on conflict involving AI systems. For more background, see "Cause prioritization for downside-focused value systems." Our research agenda will hopefully help make this easier to understand as well.
2. Regarding your "fundraising" mistakes: Did you learn any lessons in the course of speaking with philanthropists that you'd be willing to share? Was there any systematic difference between conversations that were more vs. less successful?
If we could go back, we'd define these relationships more clearly from the beginning by outlining a roadmap with regular check-ins. We'd also focus less on pitching EA and more on explaining how the philanthropists could use EA to solve their specific problems.
3. It was good to see EAF research performing well in the Alignment Forum competition. Do you have any other evidence you can share showing how EAF's work has been useful in making progress on core problems, or integrating into the overall x-risk research ecosystem?
(For someone looking to fund research, it can be really hard to tell which organizations are most reliably producing useful work, since one paper might be much more helpful/influential than another in ways that won't be clear for a long time. I don't know if there's any way to demonstrate research quality to non-technical people, and I wouldn't be surprised if that problem were essentially unsolvable.)
In terms of publicly verifiable evidence, Max Daniel's talk on s-risks was received positively on LessWrong, and GPI cited several of our publications in their research agenda. In-person feedback from researchers at other x-risk organizations has usually been positive as well.
In terms of critical feedback, others pointed out that the presentation of our research is often too long and broad, and might trigger absurdity heuristics. We've been working to improve our research along these lines, but it'll take some time for this to become publicly visible.