Strong advocate of just having a normal job and giving to effective charities.
Doctor in Australia giving 10% forever
Henry Howard
I think you could say this about any problem. Instead of working on malaria prevention, freeing caged chickens or stopping climate change, should we all just switch to working on AI so it can solve those problems for us?
I don't think so, because:
a. I think it's important to hedge bets and try out a range of things, in case AI is many decades away or doesn't work out, and
b. having lots more people working on AI won't necessarily make it come faster or better (lots of people are already working on it).
This seems to rest heavily on Rethink Priorities' Welfare Estimates. While their expected value for the "welfare range" of chickens is 0.332 that of humans, their 90% confidence interval for that number spans 0.002 to 0.869, which is so wide that we can't make much use of it.
There seems to be a tendency in EA to use expected values when just admitting "I have no idea" would be more honest.
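A quick back-of-envelope shows how little the point estimate means when the interval is that wide. This is my own sketch using the Rethink Priorities numbers quoted above; the "chickens per human" framing is just for illustration:

```python
# Rethink Priorities' welfare range for chickens, relative to humans (= 1.0):
# point estimate 0.332, 90% CI 0.002 to 0.869.
low, point, high = 0.002, 0.332, 0.869

# Hypothetical question: how many chickens' welfare "equals" one human's?
# The answer flips by orders of magnitude across the interval.
at_high = 1 / high  # if the high end is right: ~1 chicken per human
at_low = 1 / low    # if the low end is right: ~500 chickens per human

print(f"{at_high:.1f} to {at_low:.0f} chickens per human")
print(f"spread across the 90% CI: a factor of {high / low:.1f}")
```

Any cost-effectiveness conclusion drawn from the 0.332 point estimate can swing by a factor of several hundred within the stated interval, which is the sense in which the number tells us very little.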
Most suffering in the world happens in farms.
You state this like it's a fact but it's heavily dependent on how you compare animal and human suffering. I don't think this is a given. Formal attempts to compare animal and human suffering, like Rethink Priorities' Animal Welfare Estimates, have enormous error bars.
Worth being cautious in a world where ~10% of people live on <$2 a day.
It kills ~350,000 people a year. The fatality rate isn't as important as the total deaths.
"Only prolongs existence"
Preventing malaria stops people from suffering from the sickness, prevents grief from the death of that person (often a child), and boosts economies by decreasing sick days and reducing the burden on health systems.
The "terrible trifecta" of trouble getting started, keeping focused, and finishing up projects seems universally relatable. I don't know many people who would say they don't struggle with each of these things. Drawing the line between normal and pathological human experience is very difficult, which is why the DSM-5 criteria are quite specific (and not perfect).
It might be useful to also interview people without ADHD, to differentiate pathological ADHD symptoms from normal, universal human experiences.
The risks of overdiagnosis include:
People can develop unhealthy cognitive patterns around seeing themselves as having a "disease" when they're actually just struggling with the standard human condition
They might receive harmful interventions that they don't need
It adds unnecessary burden to health systems.
Ghana has approved the use of a malaria vaccine with >70% efficacy
The step that's missing for me is the one where the paperclip maximiser gets the opportunity to kill everyone.
Your talk of "plans" and the dangers of executing them seems to assume that the AI has all the power it needs to execute the plans. I don't think the AI crowd has done enough to demonstrate how this could happen.
If you drop a naked human in among some wolves, I don't think the human will do very well despite its different goals and enormous intellectual advantage. Similarly, I don't see how a fledgling sentient AGI on OpenAI's servers can take over enough infrastructure to pose a serious threat. I've not seen a convincing theory for how this would happen. Mail-order nanobots seem unrealistic (too hard to simulate the quantum effects in protein chemistry); the AI talking itself out of its box is another suggestion that seems far-fetched (the main evidence seems to be some chat games that Yudkowsky played a few times); and a gradual takeover via voluntary uptake into more and more of our lives seems slow enough to stop.
Why is almost every post on the front page about AI safety?
I'm a doctor and I think there's a lot of underappreciated value in medicine, including:
Clout: Society grants an inappropriate amount of respect to doctors, regardless of whether they're skilled or not, junior or senior. If you have a medical degree, people respect you, listen to you, and take you more seriously.
Hidden societal knowledge: Not many people get to see as broad a cross-section of society as you see studying medicine. You meet people at their very best and worst, you meet incredibly knowledgeable people and people who never learnt to read, people who have lived incredible lives and people who have been through trauma you couldn't imagine. You gain an understanding of how broad the spectrum of human experience is. It's humbling and grounding.
Social skills: Medicine is a crash course on how not to be cripplingly socially awkward (not everyone passes with flying colours). You become better at relating to people, making them feel comfortable, talking about difficult topics, navigating conflict. These are all highly transferable skills.
Latent medical knowledge: There's a real freedom in being comfortable knowing when and when not to go to the hospital. Some people go to the Emergency Department every time they have a stomach ache, just in case. Learning medicine means you have a general idea about which problems are actually worth worrying about.
Job security: You can be pretty sure you'll always have a job no matter what (until GPT-6 arrives, but that applies to anything).
Opens doors: Studying med doesn't mean you need to be a doctor. You can use insider knowledge of the medical field in med tech (not many doctors can code, a useful combo), or work in medical research (make some malaria vaccines) or global health.
I don't feel like my work as a doctor is directly very impactful (I mostly do hospital paperwork). But I gave 50% of my income in my first year and I've been giving 10% of my income since. In this way you can have a lot of positive impact.
I feel the weakest part of this argument, and the weakest part of the AI Safety space generally, is the part where AI kills everyone (part 2, in this case).
You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone? I don't think that follows. This seems like what the average hunter-gatherer would have thought when asked to imagine our modern commercial airlines or microprocessor industries: how could you achieve something requiring so much research, so many resources and so much coordination without enslaving huge swathes of society and killing anyone who gets in the way? And wouldn't the knowledge to do these things create terrible new dangers?
Luckily the hunter-gatherer is wrong: the path here has led up a slope of gradually increasing quality of life (though some disagree).
"estimate… will not change much in response to new information" seems like the definition of certainty.
It seems very optimistic to think that with enough calculation and data analysis we can overcome the butterfly effect. Even your example of the correlation between population and economic growth is difficult to predict (e.g. concentrating wealth by reducing family size might have positive effects on economic growth).
I disagree with the assumption that those +1000/-1000 long-term effects can be known with any certainty, no matter how many resources you spend on studying them.
The world is a chaotic system. Trying to predict where the storm will land as the butterfly flaps its wings is unreasonable. Also, some of the measures you're trying to account for (e.g. the utility of a wild animal's life) are probably not even measurable. The combination of these two difficulties makes me very dubious about the value of trying to factor long-term mosquito wellbeing into bednet effectiveness calculations, or trying to account for the far-future risks/benefits of population growth when assessing the value of vitamin supplementation.
I think attempting to account for every factor is a dead end when those factors themselves have huge uncertainty around them.
e.g.:
There's huge uncertainty around whether increasing the human population is inherently good or bad.
There's huge uncertainty around when a wild animal's life is worth living.
There's huge uncertainty about how any given intervention now will positively or negatively affect the far future.
I think when analyses ignore these considerations it's not because they're being lazy; it's an acknowledgment that it's only worth working with factors we have some certainty about, like that vitamin deficiencies and malaria are almost certainly bad.
A couple of problems I have with this analysis:
Excluding everything except the longtermist donations seems irrational. There is a lot of uncertainty around whether longtermist goals are even tractable, let alone whether the current longtermist charities are making or will make any useful progress (your link to 80,000 Hours' 18 Most Pressing Problems is broken, but their pressing areas seem to include AI safety, preventing nuclear war, preventing great power conflict and improving governance, each of which has huge question marks around it when it comes to solutions). I think you're overestimating the certainty, and therefore the value, of the projects focusing on "creating a better future".
You should account for the potential positive sociopolitical effects that might come from a large bloc of professionals openly pledging a portion of their income to effective charity. It has the potential to subtly normalise effective charity in the public consciousness, leading to more people donating more money to more effective charities, and to governments allocating aid money more effectively. This theory of change is difficult to measure or prove, as with any social movement, but I don't think it should be ignored.
Looking at preventative health as a cost-effective global health measure is great! I haven't read this report in full, but some problems stick out at a glance:
1. I don't think hypertension is neglected at all. Some of the world's most commonly prescribed drugs are for hypertension (lisinopril, amlodipine and metoprolol are nos. 3, 4 and 5, per Google). I also don't think salt reduction is a neglected treatment: almost every person presenting to a doctor with hypertension will be recommended to reduce their salt intake.
2. It doesnât seem very effective:
sodium intake significantly reduces resting systolic blood pressure (n.b. Aburto et al.: −3.39 mm Hg)
...
every 10 mm Hg fall in BP sees a reduction in risk of major cardiovascular disease events, given a relative risk (RR) of 0.8
3.39 mm Hg doesn't seem like very much, given that a 10 mm Hg fall is required for a 20% reduction in cardiovascular disease risk.
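To put rough numbers on that: if we assume the 10 mm Hg → RR 0.8 relationship scales log-linearly with blood pressure (my assumption for the sake of the estimate, not a claim from the cited papers), the implied effect of a 3.39 mm Hg fall is small:

```python
rr_per_10mmhg = 0.8  # RR for major cardiovascular events per 10 mm Hg fall in BP
bp_fall = 3.39       # mm Hg, mean effect of sodium reduction (Aburto et al.)

# Log-linear interpolation: RR scales as 0.8^(fall / 10).
implied_rr = rr_per_10mmhg ** (bp_fall / 10)
print(f"implied relative risk: {implied_rr:.3f}")
```

That works out to an RR of about 0.93, i.e. roughly a 7% relative risk reduction rather than 20%, before accounting for imperfect compliance with the salt reduction itself.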
3. People hate being taxed for doing things they like
I don't find your analysis of the reduction of freedom of choice very convincing. You dismiss it because:
food people are eating will be largely the same in terms of macro ingredients, and will taste subjectively the same given reduction within a range as well as gradual implementation
I don't think this is true. Salt is yummy and people know it. Most people with hypertension are already told to reduce their salt intake, and many choose not to. They make that choice for a reason, and forcing or taxing them for it would, I think, lead to significant resentment, resistance and distrust of government. People are already suspicious of over-regulation and of the WHO, and I think even a campaign for this sort of thing might cause more trouble than the small chance of success is worth.
Extremely cringe article.
The argument that AI will inevitably kill us has never been well formed, and he doesn't propose a good argument for it here. No-one has proposed a reasonable scenario by which immediate, unpreventable AI doom would happen (the protein-nanofactories-by-mail idea underestimates the difficulty of simulating quantum effects on protein behaviour).
A human dropped into a den of lions wonât immediately become its leader just because the human is more intelligent.
The way you describe WELLBYs (as being heavily influenced by the hedonic treadmill, and so potentially unable to distinguish between the wellbeing of the Sentinelese and the modern Londoner) seems to highlight their problems. There's a good chance a WELLBY analysis would have argued against the agricultural revolution, which doesn't seem like a useful opinion.
No, it's not obvious, but the implications are absurd enough (the agricultural revolution was a mistake, cities were a mistake) that I think it's reasonable to discard the idea.
The error bars on Rethink Priorities' welfare ranges are huge. They tell us very little, and making calculations based on them will tell you very little.
I think that without some narrower error bars to back you up, making a post suggesting "welfare can be created more efficiently via small non-human animals" is probably net negative: it contributes to the EA community looking crazy without the positive impact of a well-supported argument.