I would be curious to hear more about the reasons behind your decision to focus specifically on getting folks into GCR-related careers, rather than other common EA cause areas, if you’re happy to share!
As Effektiv Spenden already covers effective giving in Germany, from small-scale donors to high-net-worth individuals, we see our comparative advantage mainly in helping people move into more impactful careers. Here, we broadly regard the top skills and high-impact career paths that 80,000 Hours has identified as good guidance. We think that career impact follows a heavy-tailed distribution, with most of the impact EAD will have coming from supporting a few individuals.
One important consideration is whether EAD can contribute to reducing AI x-risk. One credible worldview holds that transformative AI is likely to be developed within a few years, making AI alignment a field that needs experienced talent very soon. Under this worldview, it seems most important to reach professionals who can be convinced to work on this problem now.
Continuing to reach out to students and people earlier in their careers, encouraging them to think about cause prioritisation, seems more important under a worldview with longer timelines to transformative AI, or if EA can contribute less to the field than in previous years. We think both worldviews are credible enough to warrant allocating resources to each.