Love this.
I think there’s a meme that high-impact careers go something like: “learn about EA → get involved in EA → get a high impact job”, while for many (most?) people the trajectory is more like “learn about EA → get involved in EA → work in something unrelated to EA and feel disillusioned”.
Your post hopefully helps fix this misunderstanding.
ezrah
How long will the survey be open? Asking as a group organizer who is planning how to get members to fill it in.
Animal welfare is just so much more neglected, relative to the scale.
However, I don’t go all the way to a strong agree, since I think the evidence base is weaker and I’m less certain that good interventions can be found. I also feel a stronger sense of moral responsibility towards humans, and apply a bigger “sentience discount” than in other moral comparisons between humans and non-human animals.
John Cochrane on why regulation is the wrong tool for AI Safety
What types of influence do you think governments from small, low influence countries will be able to have?
For example, the NZ government—aren’t they price-takers when it comes to AI regulation? If you’re not a significant player, don’t have significant resources to commit to the problem, and don’t have any national GenAI companies, how can you influence the development trajectory of AI?
My sense is that what’s happening here is that small countries have more cohesive communities, and therefore a larger % of the EA community answers the survey.
Inspiring and touching, thank you for sharing
Wishing you both the best of health going forward
Edit: it seems like this already exists! @Aaron Bergman can you confirm?
Can someone who runs an EA podcast please convert recorded EAG talks to podcast form, so that more people can listen to them? @80000_Hours @hearthisidea @Kat Woods @EA Global (please tag other podcasters in the comments)
The CEA events team seem open to this, but don’t have the podcasting expertise or the bandwidth to start a new podcast
(Full disclosure—this is a bit of a selfish ask, I’m attending EAG and want to listen to quite a few talks that I don’t have time for, and streaming them on YouTube seems clunky and not great for driving)
Thanks Ollie, makes a lot of sense
Very cool! Good luck!
Can I ask why you chose to run a summit, instead of an EAGxBrazil?
Loved it as well
Meetup for EA Israel members at EAG London 2024
Very interesting!
Thanks for the writeup
I’d be very interested in seeing a continuation with regard to outcomes (maybe career changes could be a proxy for impact?)
Also, I’m curious how you would think about the added value of a career call or participation in a program. Given that a person made a career change, the career call with 80k obviously isn’t 100% responsible for the change, but it’s probably not 0% either (if the call was successful).
Please advertise applications at least 4 weeks before closing! (longer for fellowships!)
I’ve seen a lot of cool job postings, fellowships, and other opportunities announced on the Forum or on 80k ~10 days before applications close.
Because many EA roles and opportunities get cross-posted to other platforms and newsletters, and there’s a built-in lag time between the original post and the secondary platform, this is especially relevant to EA. For fellowships and similar training programs, where so much work has gone into planning and designing the program ahead of time, I would really encourage opening applications ~2 months before closing. Keep in mind that most Forum posts don’t stay on the frontpage very long, so “posting something on the Forum” does not equal “the EA community has seen this”.

As someone who runs a local group and a newsletter, I find that opportunities with short application windows are almost always missed by my community, since there isn’t enough turnaround time between when we see the original post, the next newsletter, and time for community members to apply.
ezrah’s Quick takes
Rashi on “categories of labor”—although some learners of the EA Talmud have been known to include commenting on forums and debating philosophical turns of phrase within their definition of “labor”, the Mishna is making a chiddush and excluding types of “labor” that cannot be of assistance when building a large tent. Nafka mina (what emerges from it) is the understanding that how-to YouTube videos would be categorized as labor, by a rabbinical—not biblical—decree, but longwinded comments on obscure posts are not.
Where can I see the projects that were submitted?
Great question! I realize that I really wasn’t clear, and that it probably does exist more in EA than my instinctive impression (also—great links, I hadn’t been familiar with all of them).
What I meant by leverage was more along the lines of “the value of an insider’s perspective and the ability to leverage individual networks and skill sets”. In these cases, Nick was able to identify potential cost-effective ways to save lives because of both his training and location, and SACH is able to similarly run a cost-effective program because of their close connections with a hospital. I have a few other examples as well, such as NALA’s WASH on Wheels program (which essentially trains a team of plumbers and provides access to clean water to hundreds of programs, leveraging the existing infrastructure), and anecdotes I’ve heard about people on the ground being able to provide crucial solutions during the current Israel-Hamas crisis.
I have a sense that the classic EA (and I could very much be strawmanning here) thinks along the lines of: big problems, good solutions, niche area—but doesn’t think about who is best placed to identify or implement even better solutions that can come up because the world is messy.
After thinking about it, the “leverage” I’m referring to is probably more common than I thought, but maybe not so very well defined.
From what I understand, the per-patient treatment costs are both quite low and are given pro bono, so given how GiveWell understands leverage (which @Mo Putera pointed out in the response below), they should be strongly discounted from the costs. The question of how to incorporate the infrastructure costs—i.e. the hospital, staff training, etc.—that enable the program to operate is quite interesting, and I honestly don’t have a great idea how that fits into the model.
Speculation only:
It seems plausible to me that the value of funding “Abundance and Growth” in the USA is not measured in QALYs, but in supporting a political alternative to Trump and MAGA. The “center-left” vibes might not be a bug, but a feature.
If you think USAID cuts are important, and AI is important, and that Trump is net-negative on both of these, then maybe the most impactful thing you can do is support alternative narratives to Trumpism, help ensure he doesn’t get re-elected, and swing the House as far as possible. Of course, it needs to sound bipartisan and not be in-your-face, but “Abundance and Growth” stands in pretty clear contrast to the current administration’s policies.
This explains both why OP won’t publicly share their reasoning for this, and why it might be as cost-effective in EV terms as other options.