I’m a doctor working towards the dream that every human will have access to high quality healthcare. I’m a medic and director of OneDay Health, which has launched 35 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
I would suggest that the US toppling democratically elected leaders like Patrice Lumumba in Congo, and others in Iran and Guatemala, may have caused at least as much suffering as the Uyghur atrocities.
It’s hard to imagine anything worse than the “Great Leap Forward” though.
I’ve got really mixed feelings here and I don’t know which way to swing. I’m still extremely uncertain about how much we can influence aid policy especially amidst the current messy global political situation.
Sure, leverage can be high, but the question is how on earth do we get that leverage? What do you suggest?
I would love to see some great posts on the forum about how we could counterfactually have spent our money or time to prevent this current mess. That might convince me that future work could be valuable too.
To state the obvious, politics is super hard to influence, for many reasons including:
1) Huge money and effort is already poured into influencing politics from many angles, so achieving anything in that environment is really difficult.
2) So much unpredictability and change over time.
CE literally started an org which was trying to do something like this, which soon shut down because they didn’t feel it was working.
I think it’s easy to say that leverage is “high” for political action, but there still needs to be a meaningful pathway to make that happen. Right now, in the wake of Trump, aid policy might be a harder needle to shift than ever before. A recently elected left-wing government in the UK even just slashed aid by 40 percent. Wild stuff.
There could even be a counterargument that in a world where governments are backing away from international aid and are harder to leverage, increasing EA donations and covering gaps could be more important than it was a few years ago.
I know EA-aligned people put a lot of successful effort into helping USAID funding be more evidence-based, which bore great fruit for a while, but that work is unfortunately in the dust, at least for now.
I’m all for political leverage and putting funding into it, but we need concrete fundable ideas, different from those we have already tried which didn’t work.
Also, making up the shortfall on the ground is a great thing to do, and like you said, in no way mutually exclusive with political work.
Although I think there probably are some great AI ideas that could help the world’s poorest people, it’s not easy to think of these or implement them. Usually the economies of the poorest people involve surprisingly little technology and are based around hand-powered agriculture, basic goods and basic services. Internet, where it is strong, is often surprisingly expensive.
Y Combinator startups aim to make money from the richest people on earth, people whose economies and lives are tied to technology and the internet.
So the economic realities and ecosystems are completely different, and it’s hard for AI tools to penetrate super poor systems, while there are endless ways to capture value in high-income countries.
I think there may be some super valuable AI ideas that could change the game for the world’s poorest, but it’s not obvious or clear yet.
And where there are good ideas, convincing governments and NGOs to take on new ideas is super difficult. At OneDay Health we have made a cool healthcare mapping tool based on AI-generated population and road data, which has transformed our ability to target our healthcare and has the potential to improve healthcare on the margins in multiple ways. NGOs and governments have shown some interest, but besides a small USAID pilot in Pakistan it has been quite hard to get uptake.
I’m a sample size of one who lives in a low-income context and has been thinking hard about it, but I’m struggling to come up with many potentially useful AI ideas right now.
Could you maybe quote an example where orgs list “cause neutrality” as a reason for listing a wide range of causes? I completely agree with your argument; it just seems unlikely these super switched-on orgs would make that argument.
Love this mobilising team, keep it going!
“Another mistake I hear with “marginal” is using it to refer to, not the best intervention for someone at the margin to do, but the absolute best action for anyone to do.”
This I really agree with. To add a little, I think our individual competitive advantages stemming from our background, education and history, as well as our passions and abilities, are often heavily underrated when we talk generically about the best we can do with our lives. I feel like our life situation can often make orders of magnitude of difference to our best life course on the margin, which obviously becomes more and more extreme as we get older.
I’m sure 80,000 Hours and Probably Good help people understand this, but I do think it’s underemphasized at times.
Great follow-up, and love the clear, concise writing! That graph of cage-free egg percentage is incredible, from 5% in 2015 to 40%. This must be one of the biggest 10-year changes in farming practices on record? What an incredible achievement. Going to share it with a bunch of people.
Maybe a weird question but what percent of that do you think EA funding/work might be responsible for? More like 1%? 10%? 20%?
I love these suggestions and have wondered about this for some time. Independent surveyors are a really good idea, not only for impact data but also for programmatic data. Although finding truly independent surveyors is harder than you might think in relatively small NGO ecosystems.
I don’t really understand what you mean by “Creating funding opportunities for third-party monitoring organisations”. Can you explain what you mean?
I also would have liked to see a couple more paragraphs explaining the background and reasoning, although good on you for putting up the draft rather than leaving it just in the word doc :D.
I would be something like 75% sure this is a direct consequence of Trump’s election.
Mathias can you make comments on all of my posts? Hahaha
I agree in this USAID case there are probably larger non-EA-specific discussion channels, although it would be nice if there was more public discourse here too. I suspect if this had happened 18 months ago there would have been more of a buzz on the forum about it.
I’m not sure there is another big forum outside of here in general, though, which hosts high quality, active global health discussions with an EA bent, unless I’m missing something.
Oh my apologies, I don’t mean downvoting, sorry, just engagement in general. The rapid response fund has 90 upvotes, yes, but zero replies.
Unfortunately there’s just not so much of a global health vibe here at the moment, things seem to have swung heavily towards animal welfare and AI. I made a few comments on various threads about the USAID freeze but got very little engagement so gave up.
Thanks for this super understandable series of thoughts. Rather disturbing to read and only cements how dubious I am about anything AI CEOs or staff say.
“we can at least push for AI companies to incorporate aspects that would clearly rule out consciousness, sentience and so on, and ensure that the companies can at least justifiably say that they have not built such things.”
On that note, do you have any ideas how you might go about this pushing? Are you thinking more inside-job work within companies, or policy work, or public awareness and pressure? I would have thought the AI company modus operandi would be to scoff and deny, deny, deny until consciousness was nigh on undeniable.
Nice one man, such a cool thing for you to do. Will put a note on new initiatives to find drugs in the article too; should have included that originally anyway!
I am suggesting that in this case “there is no ethical justification for causing the death of one 73-year-old man...”
1) I still believe in the legitimacy of American democracy. I don’t think it has failed yet on a large scale. Encouraging the assassination of a democratically elected leader undermines the whole democracy and gives legitimacy to Trump’s supporters in possible future anti-democratic actions. The future harm caused to democracy could greatly overshadow any possible short-term gain.
2) This would set a terrible precedent for the future and make justifying violence against leaders easier across the world. Non-violent norms towards leaders are super important to keep intact, not just for America but for the rest of the world as well.
3) There are so many other non-violent options which have not been taken to resist here, even though they seem to have sadly faded into obscurity these days. Martin Luther King and co. stood against tyranny arguably worse than Trump’s through massive non-violent protests, harnessing the rightness of their position and the will of the masses to create change.
I respect approaches on this front like that of Bonhoeffer. I think political situations need to be disastrous and non-reversible through other means before these kinds of extreme actions are even considered. It was many years into Hitler’s regime before Bonhoeffer even considered this kind of drastic action; we are barely a month into Trump’s.
I also disagree with this: “But it’s becoming ridiculous to work on our initiatives to help climate and to fight poverty and disease and so on while we have Mr. Trump in the White House actively and vindictively making them worse far faster than we can fix them.” How is saving lives ridiculous, regardless of what others are doing? I’ll keep trying to save them on my end, and I doubt the White House can make the situation worse faster than we can fix it. USAID is a big factor, but still a small percentage of global aid and development at the moment, and an even smaller percentage of cost-effective aid. It’s not ideal, but we can manage without it.
I’m sure there’s much more too, that’s just my top-of-head thoughts.
I love this wisdom and agree that most charities’ cost-effectiveness will be less than they claim. I include our assessment of my own charity in that, and GiveWell’s assessments, especially as causes become more saturated and less neglected. And yes, like you say, with animal charities there are more assumptions made and far wider error bars than with human-focused assessments.
I haven’t (and won’t) look into this in detail, but I hope some relatively unmotivated people will compare these analyses in detail.
That might be true by your lights Vasco, but we are discussing a specific issue here (GiveWell vs. animal welfare confidence intervals) and I think it’s a bit disingenuous to bring adjacent arguments like the meat-eating problem into this here.
That argument is weak to me because you could take any intervention we are clueless about and it would look better than global health interventions within most of the interval. If our interval spans zero to close to infinity then global health interventions are going to be a speck near the bottom of that interval.
“Why aren’t we publicly shaming AI researchers every day? Are we too unwilling to be negative in our pursuit of reducing the chance of doom? Why are we friendly with Anthropic? Anthropic actively accelerates the frontier, currently holds the best coding model, and explicitly aims to build AGI—yet somehow, EAs rally behind them? I’m sure almost everyone agrees that Anthropic could contribute to existential risk, so why do they get a pass? Do we think their AGI is less likely to kill everyone than that of other companies? If so, is this just another utilitarian calculus that we accept even if some worlds lead to EA engineers causing doom themselves? What is going on...”
Preface: I have no skin in this game and no inside knowledge; this is just from reading the forum for a few years plus some chats.
I think you’ve put this well. Yes, I think many people think Anthropic are more likely not to kill us all than other labs, which is why you’ll still see their jobs advertised on the forum and why big EA people like Holden Karnofsky have joined their team.
There are a lot of people who will agree with you that we should be fighting and shaming, not pandering (see PauseAI), along with a lot of people who won’t. There’s certainly a (perhaps healthy?) split within the effective altruism community between those who think we should work technically on the “inside” towards safety and those who think we should just be anti the labs.
Personally I think there’s some “sunk cost” fallacy here. After Open Phil pumped all that money into OpenAI, many EAs joined safety teams of labs and there was a huge push towards getting EAs doing technical safety research. After all that, it might now feel very hard to turn around and be anti the labs.
I also think that perhaps the general demeanor of many EAs is bent towards quiet research, policy and technical work rather than protest and loud public criticism, which pushes against that being a core EA contribution to AI safety too.