I’m a doctor working towards the dream that every human will have access to high quality healthcare. I’m a medic and director of OneDay Health, which has launched 53 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
NickLaing
I think this question sits too many branches down a tree of possible futures to predict meaningfully. What happens at multiple branch points could swing this either way. If I have time I’ll share more about what I mean.
Yes Jim that’s a good point.
1. Yes I’m in favour of wildlife conservation and have donated towards it. But I’m still extremely uncertain about wild animal welfare, so hardly confident enough to be “all-in” on it
2. I’m not at all sure whether human welfare dominates wild animal welfare. If I were to calculate based on my assumptions, I imagine wild animal welfare would dominate. For reference though, my welfare ranges might be orders of magnitude below RP’s.
Thanks that’s helpful
From what Daniel said I thought his median was 2028 when he started to write it? But that’s perhaps a bit nitpicky.
I think there might be a wider EA/Rationalist comms issue here when communicating with the general public. Communicating projects like this isn’t just about whether it “feels” fine; I think it’s important to think about how it might come across and the future implications. To the general public, this scenario even in 2030 still feels mega-soon and sci-fi. The problem is if we go past 2027 now, many people will say “those tech-bro idiots, they’re always wrong” and might miss the point of the thing.
If anything I think picking a more conservative, tail-end year of the timeline (2028-2030) would have been better here, to keep it relevant for longer.
I agree not the biggest deal though.
This is a brilliant summary of the situation. I actually find a straightforward list of bullets like this more compelling and easier to understand than something like Yudkowsky’s book.
Thanks appreciate that a lot :)
For the record my vote is for cG.
But you might struggle to control “the people” on this one; there has been a lot of “CoGi” and other variations floating around. When said out loud, names starting with “co” are catchier than ones starting with the letter “c”. Also, there’s a strong association between CG and computer generated? There are like 3 separate threads in the replies to your renaming post discussing possible shortenings, and I think all suggestions start with “co” lol.
These are the important things which define organizations.
As for me, I will respect cG’s wishes ;).
Yeah I think that’s something like the approach Toby and I were discussing!
I’m not sure I can get away with that? I would say for over 90% of people, 3 numbers would add even more confusion than 2. The SAT example is encouraging, although Americans make up a small proportion of my friends and acquaintances.
The concreteness is fine and makes sense for sure.
Isn’t somewhere between 2028 and 2031 then really “things go roughly as expected”, and 2027 “things go faster than expected, if every AI improvement rolls out without roadblocks”? I feel like if you’re going to put something out there in the public sphere as a leader in AI, a bit of timeline conservatism might be prudent. Not the biggest deal though I suppose.
Thanks Toby, interesting one on the communication. For policy makers I think that communication style can work OK, less so with my friends haha.
I’m still confused by why they picked 2027, even in 2025. Back when they made it, Daniel’s median forecast was 2028 and Eli’s 2031. Surely you then pick 2029 or 2030 for your scenario? Picking the “most likely year for it to happen” still feels a bit disingenuous to me.
I found this super helpful, thank you; probably the best thing I’ve read about AI timelines in the last year actually. So, so well communicated, with small words and minimal jargon. Thank you!
I know you’re mainly talking about the best thinking approach here, but how does this translate to communication about AI timelines? Distributions make a lot of sense to me but are very hard for most people to think in. This wouldn’t be useful for communicating with most of my friends, unless I maybe had an hour and a large napkin… I wonder if there is a way to communicate in a “distributy” way with people who just aren’t statistically minded?
If some regular person asks me when I think the AI apocalypse is coming, what’s a good way to communicate? I don’t want to just guess a year for all the reasons you’ve stated, but a distribution won’t be understood either. In the past I’ve said something like “I really don’t know but it could well be between 2030 and 2040”, but my impression has been this seems pathetically vague and unhelpful to most people. Any ideas on communicating AI timelines with integrity to non-statsy folks?
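For what it’s worth, here’s a minimal sketch of the kind of compression I have in mind: turn a full belief distribution into three plain-language milestones (10th/50th/90th percentiles) and say them as a sentence. The lognormal shape and all the numbers below are hypothetical, purely for illustration:

```python
# Minimal sketch, purely illustrative: compress a (hypothetical)
# belief distribution over AI timelines into three plain-language
# milestones instead of a single guessed year.
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical belief: median arrival ~7 years out, with a long right tail.
years = 2025 + rng.lognormal(mean=np.log(7), sigma=0.6, size=100_000)

p10, p50, p90 = np.percentile(years, [10, 50, 90])
print(f"Probably not before {p10:.0f}, even odds by {p50:.0f}, "
      f"and I'd be surprised if it's later than {p90:.0f}.")
```

Three numbers may still be two too many for some people, but phrased as one sentence (“probably not before X, even odds by Y, surprised if later than Z”) it seems to land better than quoting percentiles.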
As a side note, it seems strange that the 50 percent point of the guy who wrote the AI 2027 story is at about 2031ish? Why wasn’t the story then AI 2031?
I would say in general major funds = money goes to major orgs; is there evidence against this? GiveWell for example gives most of its money to very big orgs. Even if the major orgs give some donations to smaller orgs, that’s usually a small percent of what they do.
Talk to @David Nas and @Karthik Tadepalli ha. There’s increasing work within EA on development directly. There are big questions around how tractable it is, and how much EA influence can actually move the needle with huge money injectors like the IMF and World Bank active, and market forces as well.
And yeah, like @Evan LaForge said, to some extent development needs good health and education to happen (a bit of a chicken-and-egg situation).
“I think the problem is that it’s hard to establish expert ‘baselines’ via which to measure uplift”
If you could find enough experts (say 100) then randomisation is probably enough to solve this problem, even if they have a wide range of capabilities. I agree though that a category such as “2-5 years post-doc” would be even nicer. Maybe you could find a couple of large PhD or post-doc cohorts.
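To make the randomisation point concrete, here’s a minimal sketch of a hypothetical design (the names, pool size, and 50/50 split are all my own illustration, not anything from the study): randomly assign the expert pool to assisted vs. unassisted arms, so baseline skill differences wash out in expectation.

```python
# Minimal sketch, hypothetical design: randomise a pool of experts
# into AI-assisted vs. unassisted arms. With ~100 experts, baseline
# skill differences average out across the arms in expectation.
import random

experts = [f"expert_{i:03d}" for i in range(100)]  # e.g. 2-5 yr post-docs
random.seed(42)  # fixed seed so the assignment is reproducible
random.shuffle(experts)

assisted = experts[:50]    # work with AI assistance
unassisted = experts[50:]  # work without it (the baseline)
# Uplift estimate = mean(assisted outcomes) - mean(unassisted outcomes);
# randomisation makes the arms comparable without per-expert matching.
```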
This is one of the most inspiring things I’ve read in months. It’s such a good example to have someone with an illustrious tech background like you involved in a protest like this. It might jolt some into action, or at least make us think a bit harder about whether we are really morally courageous enough to do the best that we can.
I’m surprised by how much I like this, given my natural bent towards “just find organisations doing direct work”. Effective giving, talent and charity incubation are just such important areas that it makes a lot of sense to me to support the highest quality of these.
I agree it’s fantastic, not only for wellbeing itself, but also for disrupting the status quo. I hardly think even the problem of “DALYs” is solved though. Even the moral weights issue which plays into it will never be solved as such; GiveWell’s piecemeal approach (which I absolutely love and think is a great way to do it) shows how tricky it is.
Yep this is a legitimate concern; it’s hard for new projects that aren’t being incubated through CE for sure. I think there are decent arguments for bigger funders not funding new initiatives though. I think it’s not the worst thing for friends/family/non-EA funds to help start new initiatives before official funders get involved. Also (I could be wrong) if you made a very strong argument here on the forum there might be people willing to help.
The Global Health Funding Circle is another EA avenue for newer ventures :). Also Scott Alexander’s yearly giveaway is open to new ideas, and they fund a bunch of GHD stuff.
Love this @Arepo, and I largely agree. I think there’s plenty of uncertainty and space for amateur-ish discussions about GHD stuff. Yes, even talking about specific interventions it helps to have specific knowledge, but mostly it’s figure-out-able for a switched-on person. I would say a lot of technical AI discussion is harder; I struggle to understand some of the threads on LessWrong!
Thanks @ElliotTep that’s all very reasonable. As a side question I was wondering what you mean by this exactly?
“I’ve spent a fair bit of time advocating for recommended default splits across cause areas based on feedback from a few Anthropic staff.”
Fair call disappearing after dropping the debate slider to avoid the upcoming bedlam...