I’m a doctor working towards the dream that every human will have access to high quality healthcare. I’m a medic and director of OneDay Health, which has launched 35 simple but comprehensive nurse-led health centers in remote rural Ugandan villages. A huge thanks to the EA Cambridge student community in 2018 for helping me realise that I could do more good by focusing on providing healthcare in remote places.
NickLaing
Thanks for this, nice to see global health stuff again on the forum (definitely a minority of posts :D). A nice shallow investigation, and it was good to get a bit of an update on the situation here.
A couple of queries/criticisms
I’ve got some issues with your IDSR analysis. I basically agree with your points 2 to 5, but point 1 feels loose, with unlikely assumptions. “IDSR” is such a vague concept that I think it would have been worth spending a couple of paragraphs outlining what it actually is and how it worked in this test case.
“Probably much more cost-effective than it seems due to overlapping benefits of IDSR for all other diseases and preventing further epidemics. Given the per capita cost of establishing the system is $0.07, and the system may have overlapping benefits for all diseases, the intervention seems very cost-effective. Such surveillance systems may have a strong use case for antimicrobial resistance and novel pathogen detection (especially those with pandemic potential).”
I don’t understand why you make the leap to IDSR seeming “very cost-effective” based on the low cost and potential overlap for other diseases. Having seen these kinds of vague programs in Uganda, the net benefit might be close to zero, so it’s dangerous to assume any impact at all. This kind of system will never detect a new pathogen, and unless it actually changes the behaviour of prescribing health professionals (unlikely), I don’t see how it would prevent antimicrobial resistance.
Out of interest as well, what triggered you to focus on bacterial meningitis? It seems to me instinctively like a fairly well understood problem, with vaccines already available for meningitis and in development for Group B strep. You might have found something promising in your research, but it seems unlikely.
Wow, what a great answer, appreciate it!
Agreed. Looking historically as well, there’s every reason to think that war is more likely to accelerate technology development. In this case alignment focus is also likely to disappear completely if there is a serious war.
Dem drones will be unleashed with the most advanced AI software, safety be damned.
I would put the probability of a huge reduction in investment way higher than 30%: investment cycles boom and bust, as does the economy. Even a global recession or similar could massively reduce AI expenditure while AI development continued marching on at a similar or only slightly reduced rate.
On the other hand, the current crypto winter does match the OP’s definition, with practical use of crypto declining along with investment.
In general though I agree with you that looking at investment figures isn’t a robust way to define a “winter”.
On the data front, it seems like Chat GPT and other AIs don’t have access to the mass of peer reviewed journals yet. Obviously this isn’t (relatively speaking) a huge quantity of data, but the quality would be orders of magnitude higher than what they are looking at now. Could access to these change things much at all?
Great article, really nice job!
I really like your table at the end, and I’d like to challenge a few of your estimates there. I think we ignore and underrate “hometown advantage” (to steal a sports phrase). This is just my limited experience and weak intuition talking; it’s very hard to put numbers on comparative advantage.
Part of the reason I think we ignore hometown advantage is that most EAs interested in global development live in such rich countries that the multiplier for “targeting the most vulnerable”, as you put it, might be very high, perhaps 50-100x for work outside their country rather than 10x as in your case, which makes hometown comparative advantages largely irrelevant.

I love the way you put an “8x multiplier” for local network and credentials leading to greater influence and leverage within the Colombian government. I agree with this strongly.
To that I would potentially add other multipliers for working in Colombia:
- Ability to leverage language and deep cultural understanding to be more effective x 2
- Don’t waste time understanding the local landscape (health system/economy/political system) x 1.5
- Use local knowledge and networks to identify the most tractable/neglected issues x 1.5
- Happiness/contentment of being closer to home increasing productivity x 1.5

You might also consider poorer countries around you (e.g. Bolivia) where you would retain some of these competitive advantages, while also being able to target more vulnerable populations.
A couple of other comments too
I think doing a masters or something abroad, then coming back and working in Colombia, might be a good option. I don’t see the connection between studying abroad and working in another country.
Also I don’t really understand your “Ability to choose the most effective organisations” multiplier in your chart. Why would this increase outside of Colombia? Also, you could start your own ;)
Not only courtesy, but also future hope (which I think may be more important here).
Yeah, it’s really hard to test. I think the validity of point estimates is pretty reasonable for wellbeing surveys, and I agree with most of the reasoning in this post.
It’s very hard to test those biases ethically, but probably possible. Not in this kind of survey anyway.
The reasons he gave for not being worried about those biases were not unreasonable, but were based on flimsy evidence, especially the future hope bias, which may not have been researched at all.
Amazing, I think this is a great (if fairly intuitive) concept, and I feel like this post might deserve more attention.
I think I do this quite a lot, but I haven’t seen this crystallised so well before. I think we should all be sanity checking all the time.
I did have to sanity check one of your sanity checks though. Some “neglected diseases” (as defined by the WHO) actually affect lots of people. E.g. schistosomiasis infects something like 340 million people and might cause something like 2 million DALYs a year, which is hardly chicken feed ;)
Also, I am honoured (sort of) that you included my analysis of OneDay Health in your examples haha
Thanks so much, this was unusually clearly written, with only a small percentage of technicality that a global health chump like me couldn’t understand; I could still follow most of it. Please write more!
My initial reaction is, let’s assume you are right and Alignment is nowhere near as difficult as Yudkowsky claims.
This might not be relevant to your point that alignment might not be so hard, but it seemed like your arguments assume that the people making the AI are shooting for alignment, not misalignment.
For example, your comment: “As far as I can tell, the answer is: don’t reward your AIs for taking bad actions.”
What if someone does decide to reward it for that? Then do your optimistic arguments still hold? Maybe this is outside the scope of your points!
I really like this, thanks!
Another point to perhaps add (not a well-formed thought) is that two groups may be doing the exact same thing with the exact same outcome (say two vaccine companies), but because they have such different funding sources and/or political influence, there remains enormous counterfactual good.
For example during Covid, many countries for political reasons almost “had” to have their own vaccine, so they could produce the vaccine themselves and garner trust in the population. I would argue that none of America, China and Russia would have freely accepted each other’s vaccines, so they had to research and produce their own even if it didn’t make economic sense. The counterfactual value was there not because the vaccine was “needed” in a perfect world, but because it was needed in the weird geopolitical setup that happens to exist. If those countries hadn’t invented and produced their own vaccines, there would have been huge resistance to importing one from another country. Even if it was allowed and promoted, how many Americans would have accepted using Sinovac?
Or two NGOs could do the same thing (e.g. giving out bednets), but have completely different sources of funding. One could be funded by USAID and the other by DFID. It might be theoretically inefficient to have two NGOs doing the same thing, but in reality they do double the good and distribute twice as many nets because their sources of income don’t overlap at all.
The world is complicated
I didn’t express this so well, but I hope you get the gist...
This is fantastic! I will be there at the online academic workshop (as long as I remember)
Gotcha thanks that makes sense.
I love this thanks!
One thing: I don’t understand how a boycott of one paid AI takes us out of the conversation. Why do we need the LLMs to help us double down on communication?
Do you mean we need to show people the LLMs dodgy mistakes to help our argument?
Great points thanks so much, agree with almost all of it!
We’ve obviously had different experiences of activists! I have a lot of activist friends, and my first instinct when I think of activists is of people who:
1. Understand the issue they are campaigning for extremely well
2. Have a clear focus and goal that they want to achieve
3. Are beholden to their ideology, yes, but not to any political party, because they know political tides change and becoming partisan won’t help their cause.

Although I definitely know a few who fit your instincts pretty well ;)
That’s a really good point about the AI policy experts not being sure where to aim their efforts, so how would activists know where to aim theirs? Effective traditional activism needs clear targets and outcomes. A couple of points on the slightly more positive end, in support of activism:
At this early stage, where very few people are even aware of the potential of AI risk, could raising public awareness be a legitimate purpose of activism? Obviously once most people are aware of and on board with the risk, then you need the effectiveness at changing policy you discussed.
AI activists might be more likely to be EA aligned, so optimistically more likely to be in that small percentage of more focused and successful activists?
First, I don’t agree with your assumption that hunter-gatherers would likely rate their wellbeing the same as ours now. The best proxy we might have for “hunter-gatherers” today is poorer, less developed countries. People in those countries have, on average, lower wellbeing than people in richer countries. My assumption would be in the other direction: that hunter-gatherers would most likely rate their wellbeing lower than we would today.
I don’t really understand your argument in this paragraph: “Is the difference in WELLBYs significant enough to justify the hundreds of trillions of dollars and hours of effort and suffering (and negative WELLBYs) that have gone (and continue to go) into technological, economic and cultural development to give us our modern lives?”
The answer surely is a resounding yes! If the hunter-gatherers rated their wellbeing lower than us and our wellbeing has improved, then surely all that effort into “technological, economic and cultural development” is completely worth it!
Wow, that’s a great point Sanjay, I love it and agree! I’ve even thought about writing something about AI activism, like “Does AI safety need activists as much as alignment researchers?”, but it’s not my field. It’s weird to me that there doesn’t seem to already be a strong AI safety activist movement. I feel like the EA community supports activism fairly well, but perhaps a lot of the skills and personal characteristics of those working within the AI safety community don’t lean in the activist direction? Don’t know nearly enough about it to be honest.
Pros and Cons of boycotting paid Chat GPT
Thanks for this, it is interesting and important.
I don’t, however, think these issues with point estimates are the biggest problem with wellbeing research. These issues are important for calibration, yes, but a bigger problem is whether reported increases in wellbeing after an intervention are real or biased. I have said this before, apologies for being a stuck record.
There are two biases which don’t necessarily affect point estimates (like you discuss above) but do affect before-and-after measurements:

- Demand/courtesy bias: giving a higher wellbeing score after the intervention because you think that is what the researcher wants.

- “Future hope” bias: giving higher scores after any intervention, thinking (often rationally and correctly) that the positive report will make you more likely to get other, even different, types of help in future. This could be a huge problem in surveys among the poor, but there’s close to no research on it.
These might be hard to research and are under-researched, but I think it is important to try.
We should keep in mind though that these two biases don’t only affect wellbeing surveys, but to some degree any self-reported survey, for example the majority of GiveDirectly’s data.
@Jason, this seems in your area, any thoughts?
In any social policy battle (climate change, racial justice, animal rights), there will be people who believe that extreme actions are necessary. It’s perhaps unusual on the AI front that one of the highest profile experts is on that extreme, but it’s still not an unusual situation. A couple of points in favour of this message having a net positive effect:
I don’t buy the argument that extreme arguments alienate people from the cause in general. This is a common assumption, but the little evidence we have suggests that extreme actions or talk might actually both increase visibility of the cause and increase support for more moderate groups. Anecdotally on the AI front, @lilly seems to be seeing something similar too.
On a rational front, if he is this sure of doom, his practical solution seems to make the most sense. It shows intellectual integrity. We can’t expect someone to have a pdoom of 99% given the status quo, then just suggest better alignment strategies. From a scout mindset perspective, we need to put ourselves in the 99% doom shoes before dismissing this opinion as irrational, even if we strongly disagree with his pdoom.
(Related to 1), I feel like AI risk is still perhaps at the “Any publicity is good publicity” stage as many people are still completely unaware of it. Anything a bit wild like this which attracts more attention and debate is likely to be good. Within a few months/years this may change though as AI risk becomes truly mainstream. Outside tech bubbles it certainly isn’t yet.