What types of influence do you think governments from small, low influence countries will be able to have?
For example, the NZ government—aren’t they price-takers when it comes to AI regulation? If you’re not a significant player, don’t have significant resources to commit to the problem, and don’t have any national GenAI companies—how will they influence the development trajectory of AI?
ezrah
My sense is that what’s happening here is that small countries have more cohesive communities, and therefore a larger % of the EA community answers the survey.
Inspiring and touching, thank you for sharing
Wishing you both the best of health going forward
Edit: it seems like this already exists! @Aaron Bergman can you confirm?
Can someone who runs an EA podcast please convert recorded EAG talks to podcast form, so that more people can listen to them? @80000_Hours @hearthisidea @Kat Woods @EA Global (please tag other podcasters in the comments)
The CEA events team seem open to this, but don’t have the podcasting expertise or the bandwidth to start a new podcast
(Full disclosure—this is a bit of a selfish ask, I’m attending EAG and want to listen to quite a few talks that I don’t have time for, and streaming them on YouTube seems clunky and not great for driving)
Thanks Ollie, makes a lot of sense
Very cool! Good luck!
Can I ask why you chose to run a summit, instead of an EAGxBrazil?
Loved it as well
Very interesting!
Thanks for the writeup
I’d be very interested in seeing a continuation with regard to outcomes (maybe career changes could be a proxy for impact?)
Also, curious how you would think about the added value of a career call or participation in a program? Given that a person made a career change, obviously the career call with 80k isn’t 100% responsible for the change, but it’s probably not 0% either (if the call was successful).
Please advertise applications at least 4 weeks before closing! (more for fellowships!)
I’ve seen a lot of cool job postings, fellowships, and other opportunities posted on the forum or on 80k only ~10 days before applications close.
Because many EA roles and opportunities get cross-posted to other platforms and newsletters, and there’s a built-in lag between the original post and the secondary platform, this is especially relevant to EA. For fellowships and similar training programs, where so much work has already gone into planning and designing the program, I would really encourage opening applications ~2 months before closing. Keep in mind that most forum posts don’t stay on the frontpage very long, so “posting something on the forum” does not equal “the EA community has seen this”.

As someone who runs a local group and a newsletter, opportunities with short application windows are almost always missed by my community, since there’s not enough turnaround time between when we see the original post, the next newsletter, and time for community members to apply.
Rashi on “categories of labor”—although some learners of the EA Talmud have been known to include commenting on forums and debating philosophical turns of phrase within their definition of “labor”, the Mishna is making a chiddush and excluding types of “labor” that cannot be of assistance when building a large tent. Nafka mina (it emerges from this) the understanding that how-to YouTube videos would be categorized as labor, by a rabbinical—not biblical—decree, but longwinded comments on obscure posts are not.
Where can I see the projects that were submitted?
Great question! I realize that I really wasn’t clear, and that it probably does exist more in EA than my instinctive impression (also—great links, I hadn’t been familiar with all of them).
What I meant by leverage was more along the lines of “the value of an insider’s perspective and the ability to leverage individual networks and skill sets”. In these cases, Nick was able to identify potentially cost-effective ways to save lives because of both his training and location, and SACH is similarly able to run a cost-effective program because of its close connections with a hospital. I have a few other examples as well, such as NALA’s WASH on Wheels program (which essentially trains a team of plumbers and provides access to clean water to hundreds of people, leveraging the existing infrastructure), and anecdotes I’ve heard about people on the ground being able to provide crucial solutions during the current Israel-Hamas crisis.
I have a sense that the classic EA (and I could very much be strawmanning here) thinks along the lines of: big problems, good solutions, niche area—but doesn’t think about who is best placed to identify or implement even better solutions that can come up because the world is messy.
After thinking about it, the “leverage” I’m referring to is probably more common than I thought, but maybe not so very well defined.
From what I understand, the per-patient treatment costs are both quite low and are given pro bono, so given how GiveWell understands leverage (which @Mo Putera pointed out in the response below), they should be strongly discounted from the costs. The question of how to incorporate the infrastructure costs—i.e., the hospital, staff training, etc.—that enable the program to operate is quite interesting, and I honestly don’t have a great idea how that fits into the model.
Loved this post. Like sawyer wrote—it made me emotional and made me think, and feels like a great example of what EA should be.
There actually is a non-profit I’m aware of (no affiliation) that hits a lot of the criteria mentioned in the comments—https://saveachildsheart.org/. They treat life-threatening heart disease in developing countries, often by paying for transportation to Israel, where the children receive pro-bono treatment from a hospital the nonprofit has a partnership with. From a (very) quick look at their financial statements and annual report, it looks like it costs them around $6,300 to save a life, although that number could be significantly off in either direction. (From the annual report, the nonprofit doesn’t appear especially focused on the most cost-effective parts of its programming, and does many activities that look like PR—which is probably morally good if it allows them to scale. On the other hand, it’s not clear from the AR how severe the disease is in the children treated, or what share of their treatments are actually life-saving.)
Your post, and nonprofits like this, make me think of something EA often misses from its bird’s-eye approach to solutions—leverage. Both you and Save a Child’s Heart use your leverage (your proximity, their partnership with a first-world medical institution) to be impressively cost-effective, but leverage is hard to spot in a priori spreadsheets.
To everyone who replied with messages of support and wishes for a better world—thank you, I’m really glad that the EA community has people such as you, especially in such difficult times.
From what I’ve seen, peace building initiatives are more a matter of taste than proven effectiveness.
And I would wait until after the war to understand which orgs are able to effectively deliver aid to Gazans who have been affected, things will be clearer then. Now everything is complicated by the political / military situation.
Hi Ofer
Thanks for responding.
I agree with all of the facts you present in your comment! And I don’t at all think that the Israeli government is trustworthy or is trying to maximise general wellbeing, and I think that they, like most sovereign countries, value the lives of their citizens and soldiers significantly more than civilians on the other side. I don’t know if that’s good for the world, but it is how governments operate. I do think that there is effort being made to minimise civilian casualties, but I have no idea how much.
The point I was trying to make was more to caution against joining protests / building models without taking into account second-order effects or the broader context and interests of players. I think it’s quite plausible that a long term ceasefire could be better than the current policies (obviously for Gazans, but maybe even for Israelis), or that a third-option—say, creating a global coalition for sanctions and targeted killings against Hamas leadership, without widespread warfare—would be the welfare maximising option. But, as Guy points out, you need a lot of context, and I didn’t feel a need to lay out the case for them, since the ceasefire call is widespread and intuitive.
Also, there’s a (small) chance that the current Israeli policy is actually welfare maximising, which should be taken into account. I dislike the current Israeli leadership and am embarrassed that they represent me and my country, but that doesn’t mean they’re always wrong, so I try not to dismiss their positions out of hand. For context, the “no ceasefire” position had pretty broad support across the political spectrum in Israel.
Finally—I find it hilarious that Israelis talking about politics are being followed closely on the forum, so thanks again for your comment.
Hi!
From what I understand from conversations with SmokeFree Israel’s staff (which admittedly might be biased), they were the only body pushing the legislation forward, and they had to work AGAINST the existing legislation. SFI worked to fix problematic loopholes in the recently passed update to the tobacco taxation policy, and petitioned external legal bodies to help force the government to put the policy back on the agenda. They also provided the data and expert opinions that were pivotal in the discussions within the legislature once the issue had returned to the agenda.
Regarding room for funding—that point is entirely valid. We don’t think that SFI replaces AMF or MC as a top charity that everyone should donate to, but it is evidence that more highly cost-effective opportunities exist if you look for them.
To emphasize Cornelis’s point:
I’ve noticed that most of the tension in a “cause-first” model comes from it being “cause” in the singular, not “causes” (i.e., people who join EA because of GHWB and Animal Welfare but then discover that at EAG everyone is only talking about AI). Marcus claims that EA’s success is based on being cause-first, and brings examples:
“The EA community was at the forefront of pushing AI safety to the mainstream. It has started several new charities. It’s responsible for a lot of wins for animals. It’s responsible for saving hundreds of thousands of lives. It’s about the only place out there that measures charities, and does so with a lot of rigor.”

But I think that in practice, when someone today calls for “cause-first EA”, they’re calling for “longtermist / AI-safety-focused EA”. The diversity of the examples above seems to support a “members-first EA” (at least as outlined in this post).
Animal welfare is just so much more neglected, relative to the scale.
However, I don’t go all the way to a strong agree, since I think the evidence base is weaker and I’m less certain of finding good interventions; I also feel a stronger sense of moral responsibility towards humans, and apply a bigger “sentience discount” than other moral comparisons between humans and non-human animals would.