I’m a doctor working towards the dream that every human will have access to high-quality healthcare. I’m a medic and director of OneDay Health, which has launched 35 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
NickLaing
I love the warmth, energy and enthusiasm in this post. Although it’s not everyone’s style and not the norm on the forum, it’s nice from time to time and was a mini pick-me-up this morning.
Especially appreciated the effusive thanks to your donors, guests and supporters—feels good.
I agree—why state explicitly that you aren’t recruiting mid or late career people?
Even if it’s not your priority, why not say something like “we intend to focus our efforts to bring new people into the community on students (especially at top universities) and young professionals. At the same time, we will encourage people of all ages and backgrounds to join the community, while offering targeted support to some high-impact opportunities to get experienced mid/late-career professionals on board.”
Or something.
I completely agree—using less money in our everyday lives is a huge factor here which can easily be ignored. I also ask people to consider their generational wealth. Many people are set to inherit a lot of money with high likelihood but don’t factor that into their saving/giving plans, which seems weird to me, at least within an EA framework. For example, if you have an 80 percent chance of inheriting $500,000, then you can afford to save a lot less than someone who won’t inherit anything.
As a side note, personally I find saving a lot of money quite hard to justify morally, especially if you have a solid safety net through government/family/friends—but that’s a whole other discussion!
I’m the same—I have no idea how to put it on the forum.
I’ll add that this might well be a very doable commitment: given this GiveDirectly donation and others of his, he might already be doing it, or be pretty close...
Strong upvote for the attempted mirth—I think I’m one of the few that appreciates it around here :D.
That’s a nice one, Allen. There is the “changed my mind” reaction already, which gets some of the way there.
This is a great one
Sure, but 10x seems a weird place to start—surely start with “more cost-effective” before applying arbitrary multipliers...
I think this is a good topic, but including the word “far” kind of ruins the debate from the start, as it seems like the person positing it may already have made up their mind, and it introduces unnecessary bias.
I think that would get different responses, but I don’t think it is quite as good a thought experiment because it probably doesn’t approximate the likely future as well as the other thought experiment.
Future AI is likely to look like nothing at all (in a computer), or look like a non-human entity, perhaps a robot. I agree that an entity looking and acting basically exactly the same as a human would elicit very different emotional and practical responses from us, but it seems very unlikely that is how we will encounter early-stage future “non-conscious” AGI, or whatever we want to call it.
It may be true that the real-life reaction would be different from what is said, but I can’t test it, and like you said it could swing both ways. I think it’s good, as much as possible, to test what can be tested and lean towards evidence over theory, even when the evidence isn’t great. Maybe this is a bit of a difference between the inclinations of us GHD folks and AI safety folks too.
I love the accessible way you wrote this article—thanks—and I loved that thought experiment. I’m going to test it on more people tomorrow... I’m interested in this statement:
“Whatever you feel about this thought experiment, I believe that most people in that situation would feel compelled to grant the robots basic rights.” I would like to actually poll this because I’m not entirely sure it’s true—I’d be 50/50 on what the general public would think. My instinct was that I’m fine dissecting the robot, and I told the story to my wife and she was all good with it too.
There’s an episode in the series “The Good Place” where a similar-ish thing happens and they “kill” a “robot” called Janet several times. The people I watched that with weren’t that perturbed at the time.
Good times with anecdata. I could well just be hanging out with edge-of-the-bell-curve kinds of people as well.
IMO sentience be sentience. I’m more compelled by the argument that there’s a high chance we won’t be able to figure out if it’s there or not.
This is a little horrifying to hear put this starkly, but makes perfect sense.
BTW, I have no problem at all believing that music festivals get funded—GIZ, for example, funds all kinds of strange “events” here in Uganda without much of a theory of change.
I would pitch for every 2 months, but I like the sentiment of doing it a bit more.
This is a great point—I wouldn’t call it the “most” neglected, but I find it bizarre that many farmers, even those with a bit of capital, don’t use basic machines that could even double their yield or halve their labour.
I think there are a lot of solutions already out there though, some even from 100 years ago, that could probably be used more.
Thanks Jason, we clearly have different bars but you make a good point. I would consider 10-20 priorities fine. I will adjust up to 2% based on this.
I feel like 5% of EA-directed funding is a high bar to clear to agree with the statement “AI welfare should be an EA priority”. I would have maybe pitched for 1-2% as the “priority” bar, which would still be 10 million dollars a year even under quite conservative assumptions as to what would be considered unrestricted EA funding.
This would mean that across all domains (X-risk, animal welfare, GHD), a theoretical maximum of 20 causes—more realistically maybe 5-15 causes (assuming some causes warrant 10-30% of funding)—would be considered EA priorities. 80,000 Hours doesn’t have AI welfare in their top 8 causes, but it is in their top 16, so I doubt it would clear the “5%” bar, even though they list it under their “Similarly pressing but less developed areas”, which feels priority-ish to me (perhaps they could share their perspective?).
It could also depend how broadly we characterise causes. Is “Global Health and development” one cause, or are Mosquito nets, deworming and cash transfers all their own causes? I would suspect the latter.
Many people could therefore consider AI welfare an important cause area in their eyes but disagree with the debate statement, because they don’t think it warrants 5%+ of EA funding despite its importance.
Or I could be wrong, and many could consider 5% a reasonable or even low bar. It’s clearly a subjective question and not the biggest deal, but hey :D.
This is a wonderful post, basically everything you’ve said here makes sense and lines up with my (limited) experience of those in control of the aid dollars.
This sent shivers down my spine a little. I too feel there is a new wave of “holistic” aid vibes coming through at the moment, but I’m surprised it’s as prevalent as you saw at the conference.
“Seemingly every talk had brought up how their NGO was adopting a new holistic approach to aid, each featuring six new buzzwords and a curious lack of measurement.”
Here are three gems of wisdom I especially appreciated:
“What is left is fought over by hundreds, if not thousands of NGOs all looking for funding. I can’t think of any other government budget with as many entities fighting over as small a budget.”
“In some sense, ‘improving the cost-effectiveness of aid’ is not really an intervention any more than ‘improving public health in Africa’ is.”
”But if someone wanted to start a project similar to ours because the EV looks really good on paper, I am less optimistic about their chances.”
I have one nagging question here (no pressure to answer): do you think 18 months was long enough to really give a project like this a good go? So much of policy/lobbying work is about relationships, and I doubt anyone could build the strength of relationships that might be needed to move the needle in that time. And you did have some success in that short period as well. I wasn’t completely clear how much of the decision to shut down was about results/funding vs. your energy levels and optimism.
Very minor point, in defence of African countries where there might be decent-ish democracy. Depending on your criteria, Kenya might “count” here.
”I asked GPT-4 to list democracies in which a major candidate refused to accept defeat in a national election. GPT-4 was unable to list any democracy other than the US. (Instead, it misunderstood the question and included countries like Kenya, Venezuela and Belarus, which obviously don’t count).”