This is cool! Good luck with the program!
This post is so valuable; I remember flinching and trying to “save” my call for multiple months until a friend at an EA fellowship literally told me, “You do know that they give you the stuff to prep with if you are accepted, right?” I applied that very same night and probably thought about some aspect of my call nearly every other week of my summer internship.
What are the biggest bottlenecks and/or inefficiencies that impede 80K from having more impact?
I have seen way too many people not wanting to apply for 80K calls because they aren’t EAs or don’t want to work in x-risk areas. It almost seems like the message is “80K is an EA-aligned-only service.” How is the team approaching this (e.g., changes in messaging)?
How much weight would you want people to give 80K calls in their overall decision-making? (Approximate ranges or examples are fine.)
How often do you direct someone away from AI safety to work on something else (say, global health and development)?
What kind of criteria or plans do you look for in people who are junior in the AI governance field and looking for independent research grants? Is this a kind of application you would want to see more of?
Really nice post. I think a lot of EA doesn’t appeal to people from all backgrounds (especially Global South countries) with the same level of enthusiasm, which is a real shame given that much of the most cost-effective good that can be done in the world right now is in those very places.
I definitely agree that the current silence makes a runaway, fast-and-dirty AI development scenario much more likely and makes the space much tenser.
Additionally, there might just be concerns that these labs have thought of from a business or at-scale research point of view which we haven’t (this would really help an already strained, resource-scarce alignment field figure out what to prioritize!).
Ultimately, I think what is stopping these labs is PR and a sense of “tainting their own field.”
Operationalize in what context or format?
Fantastic post! I am delighted to see someone take a crack at an impact analysis of something abstract and reliant on second-order ripple effects; this has been bothering me for a while.
For EAs starting out, there should be some focus on just doing good and not necessarily on aggressively optimizing for doing good better, especially if you don’t have a lot of credibility in that space.
Also, at the end of the day, EA is just a principle/value system which you can rely on in pretty much any career you end up pursuing. The part about EA being a support system and a place to develop your values is often left out, and as a result, a lot of early-stage excited EAs just want to “get into” or “get stuff out of” EA.
Nice summary post!
On the point of non-EA funders coming into the space, it’s important to consider messaging: we don’t want to come off as alarmist, overly patronizing, or too certain of ourselves, but rather to engage in constructive dialogue that builds shared understanding of the stakes involved.
There also needs to be incentive alignment, which in the short term might mean collaborating with people on things that aren’t directly x-risk related, like promoting ethical AI or enhancing transparency in AI development.