What types of influence do you think governments from small, low influence countries will be able to have?
For example, the NZ government—aren’t they price-takers when it comes to AI regulation? If you’re not a significant player, don’t have significant resources to commit to the problem, and don’t have any national GenAI companies—how will they influence the development trajectory of AI?
John Cochrane on why regulation is the wrong tool for AI Safety
My sense is that what’s happening here is that small countries have more cohesive communities, and therefore a larger % of the EA community answers the survey.
Inspiring and touching, thank you for sharing
Wishing you both the best of health going forward
Edit: it seems like this already exists! @Aaron Bergman can you confirm?
Can someone who runs an EA podcast please convert recorded EAG talks to podcast form, so that more people can listen to them? @80000_Hours @hearthisidea @Kat Woods @EA Global (please tag other podcasters in the comments)
The CEA events team seem open to this, but don’t have the podcasting expertise or the bandwidth to start a new podcast
(Full disclosure—this is a bit of a selfish ask, I’m attending EAG and want to listen to quite a few talks that I don’t have time for, and streaming them on YouTube seems clunky and not great for driving)
Thanks Ollie, makes a lot of sense
Very cool! Good luck!
Can I ask why you chose to run a summit, instead of an EAGxBrazil?
Loved it as well
Meetup for EA Israel members at EAG London 2024
Very interesting!
Thanks for the writeup
I’d be very interested in seeing a follow-up with regard to outcomes (maybe career changes could serve as a proxy for impact?)
Also, I’m curious how you would think about the added value of a career call or participation in a program. Given that a person made a career change, obviously the 80k career call isn’t 100% responsible for the change, but (if the call was successful) it’s probably not 0% responsible either.
Please advertise applications at least 4 weeks before closing! (more for fellowships!)
I’ve seen a lot of cool job postings, fellowships, and other opportunities that announce open applications on the forum or on 80k ~10 days before closing.
Because many EA roles and opportunities get cross-posted to other platforms or newsletters, with a built-in lag between the original post and the secondary platform, this is especially relevant to EA. For fellowships and similar training programs, where so much work has already gone into planning and designing the program, I would really encourage opening applications ~2 months before closing. Keep in mind that most forum posts don’t stay on the frontpage very long, so “posting something on the forum” does not equal “the EA community has seen this”.

As someone who runs a local group and a newsletter, I find that opportunities with short application windows are almost always missed by my community, since there isn’t enough turnaround time between when we see the original post, when the next newsletter goes out, and when community members can actually apply.
ezrah’s Quick takes
Rashi on “categories of labor”—although some learners of the EA Talmud have been known to include commenting on forums and debating philosophical turns of phrase within their definition of “labor”, the Mishna is making a chiddush and excluding types of “labor” that cannot be of assistance when building a large tent. Nafka mina (the practical upshot): how-to YouTube videos would be categorized as labor, by rabbinical—not biblical—decree, but long-winded comments on obscure posts are not.
Where can I see the projects that were submitted?
Great question! I realize that I really wasn’t clear, and that it probably does exist more in EA than my instinctive impression (also—great links, I hadn’t been familiar with all of them).
What I meant by leverage was more along the lines of “the value of an insider’s perspective and the ability to leverage individual networks and skill sets”. In these cases, Nick was able to identify potentially cost-effective ways to save lives because of both his training and his location, and SACH is similarly able to run a cost-effective program because of its close connection with a hospital. I have a few other examples as well, such as NALA’s WASH on Wheels program (which essentially trains a team of plumbers and provides access to clean water to hundreds of programs, leveraging the existing infrastructure), and anecdotes I’ve heard about people on the ground being able to provide crucial solutions during the current Israel-Hamas crisis.
I have a sense that the classic EA (and I could very much be strawmanning here) thinks along the lines of: big problems, good solutions, niche area—but doesn’t think about who is best placed to identify or implement even better solutions that can come up because the world is messy.
After thinking about it, the “leverage” I’m referring to is probably more common than I thought, but maybe not so very well defined.
From what I understand, the per-patient treatment costs are both quite low and covered pro-bono, so given how GiveWell understands leverage (which @Mo Putera pointed out in the response below), they should be strongly discounted from the costs. The question of how to incorporate the infrastructure costs—i.e., the hospital, staff training, etc.—that enable the program to operate is quite interesting, and I honestly don’t have a great idea how that fits into the model.
Loved this post. Like sawyer wrote—it made me emotional and made me think, and feels like a great example of what EA should be.
There actually is a non-profit I’m aware of (no affiliation) that hits a lot of the criteria mentioned in the comments—https://saveachildsheart.org/. They treat life-threatening heart disease in developing countries, often by paying for transportation to Israel, where the children receive pro-bono treatment from a hospital the nonprofit has a partnership with. From a (very) quick look at their financial statements and annual report, it appears to cost them around ~$6,300 to save a life, although that number could be significantly off in either direction: the annual report suggests the nonprofit is not especially focused on the most cost-effective parts of its programming and does many activities that look like PR (which is probably morally good if it allows them to scale); on the other hand, it’s not clear from the report how severe the disease is in the children treated, or what share of their treatments are actually life-saving.
Your post, and nonprofits like this, make me think of something EA often misses from its bird’s-eye approach to solutions—leverage. Both you and saveachildsheart use your leverage (your proximity, their partnership with a first-world medical institution) to be impressively cost-effective, but leverage is hard to spot in a priori spreadsheets.
To everyone who replied with messages of support and wishes for a better world—thank you, I’m really glad that the EA community has people such as you, especially in such difficult times.
From what I’ve seen, peace building initiatives are more a matter of taste than proven effectiveness.
And I would wait until after the war to understand which orgs are able to effectively deliver aid to Gazans who have been affected, things will be clearer then. Now everything is complicated by the political / military situation.
Animal welfare is just so much more neglected, relative to the scale.
However, I don’t go all the way to a strong agree, since I think the evidence base is weaker and I’m less certain that good interventions can be found. I also feel a stronger sense of moral responsibility towards humans, and I apply a bigger “sentience discount” than most moral comparisons between humans and non-human animals do.