I’m begging you to just get a normal job and give to effective charities.
Doctor in Australia giving 10% forever
Henry Howard
I think overall this post plays into a few common negative stereotypes of EA: Enthusiastic well-meaning people (sometimes with a grandiose LoTR reference username) proposing grand plans to solve an enormously complex problem without really acknowledging or understanding the nuance.
Suggesting that we simply develop an algorithm to identify “high-quality content”, and that a combination of crowds and experts will reliably distinguish factual from non-factual information, seems to miss the point of the problem: both of these things are extremely difficult, and that is why we have a disinformation crisis.
Many good points:
- Use of expected value when error bars are enormously wide is stupid and deceptive
- EA has too many eggs in the one basket that is GiveWell’s research work
- GiveWell under-emphasises the risks of their interventions and overstates the certainty of their benefits
- EA is full of young aspiring heroes who think they’re the main character in a story about saving the world
- Longtermism has no feedback mechanism and so is entirely speculative, not evidence-based
- Mob think is real (this forum still gives people with more karma more votes for some reason)
But then:
- His only suggestions for a better way to reallocate power/wealth/opportunity from rich to poor are: 1. acknowledging that it’s complex and 2. consulting with local communities (neither is a new idea, and both are often already done)
- Ignores the very established, non-EA-affiliated body of development economists using RCTs; Duflo and Banerjee won the Nobel memorial economics prize for this, and Dean Karlan, who started Innovations for Poverty Action, now runs USAID. EA might be cringe but these people aren’t.
- Sounds very difficult when deadly drugs like fentanyl, midazolam and propofol can easily be injected through an intravenous line. You can’t get an IV line on a baby in utero; I think that’s why injection into the heart is done in that case.
- The massive error bars around how animal well-being/suffering compares to that of humans mean it’s an unreliable approach to reducing suffering.
- Global development is a prerequisite for a lot of animal welfare work. People struggling to survive don’t have time to care about the wellbeing of their food.
- Aside from the impossibility of quantifying fetal suffering with any certainty, and the social and political intractability of this idea: potassium chloride is often injected directly into the fetal heart, not the veins, so the comparison to lethal injection or animal euthanasia might be wrong.
Doesn’t pass the sniff test for me. Two concerns:
1. Every vegetarian I’ve met or heard of is vegetarian because of either a) animal welfare, b) climate change or c) cultural tradition. It seems very unlikely that any of these factors could be strongly genetic.
2. They’re determining genetic heritability by comparing identical twin pairs with non-identical twin pairs (i.e. if the identical twins are more similar in their preferences than non-identical twins, they assume that there’s more of a genetic component). I imagine that there could be lots of confounders here. Growing up as an identical twin is a different experience to being a non-identical twin. There could be different environmental factors between the two situations (e.g. maybe identical twins tend to feel closer and more closely mimic each other’s behaviours/choices).
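For what it’s worth, the twin comparison described above is usually formalised with Falconer’s formula. A minimal sketch (the correlation values are made up for illustration, and the formula’s equal-environments assumption is exactly what the confounders above would violate):

```python
# Falconer's formula: heritability H^2 ~= 2 * (r_MZ - r_DZ), where
# r_MZ and r_DZ are within-pair trait correlations for identical
# (monozygotic) and fraternal (dizygotic) twins. It assumes MZ and
# DZ pairs experience equally similar environments -- if MZ twins
# mimic each other more, the estimate is inflated.

def falconer_heritability(r_mz: float, r_dz: float) -> float:
    """Rough heritability estimate from twin-pair correlations."""
    return 2 * (r_mz - r_dz)

# Illustrative (made-up) correlations for a dietary preference:
h2 = falconer_heritability(r_mz=0.5, r_dz=0.3)
print(round(h2, 3))  # 0.4 -- inflated if MZ environments are more alike
```

The point is that the whole estimate rides on the gap between the two correlations, so any non-genetic reason for MZ pairs to be more similar feeds straight into the “heritability” number.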
If any of these think tanks had good evidence that their strategy reliably affected economic development, the strategy would quickly be widely adopted and promoted by the thousands of economic development researchers and organisations striving to find such a strategy. Economic development is not a neglected or underfunded field.
Development economics is a full-fledged academic field. Very intelligent people have been working very hard for many years on finding ways to improve economic development. Unlikely that outsiders on an internet forum will spot neglected solutions.
Would be ecstatic to be proven wrong. In the meantime this sort of post makes the community look arrogant and out of touch.
The error bars on the Rethink Priorities’ welfare ranges are huge. They tell us very little, and making calculations based on them will tell you very little.
I think without some narrower error bars to back you up, making a post suggesting “welfare can be created more efficiently via small non-human animals” is probably net negative, because it has the negative impact of contributing to the EA community looking crazy without the positive impact of a well-supported argument.
I think you could say this about any problem. Instead of working on malaria prevention, freeing caged chickens or stopping climate change should we just all switch to working on AI so it can solve the problems for us?
I don’t think so, because:
a. it’s important to hedge bets and try out a range of things in case AI is many decades away or doesn’t work out, and
b. having lots more people working on AI won’t necessarily make it come faster or better (there are already lots of people working on it).
This seems to rest heavily on Rethink Priorities’ welfare estimates. While their expected value for the “welfare range” of chickens is 0.332 that of humans, their 90% confidence interval for that number spans 0.002 to 0.869, which is so wide that we can’t make much use of it.
Seems to be a tendency in EA to try to use expected values when just admitting “I have no idea” is more honest and truthful.
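To illustrate how little those numbers pin down, here is a small sketch using the figures quoted above (the “chickens per human-equivalent” framing is my own illustration, not Rethink Priorities’):

```python
# Figures quoted above: expected welfare range for chickens = 0.332
# of a human's, with a 90% CI of [0.002, 0.869]. See how a simple
# trade-off question answers differently across that interval.

low, ev, high = 0.002, 0.332, 0.869

# "How many chickens' welfare ranges sum to one human's?"
for label, w in [("low end", low), ("expected", ev), ("high end", high)]:
    print(f"{label}: {1 / w:.0f} chickens per human-equivalent")

# The answer spans roughly 1 to 500 chickens -- a ~400x range --
# so any expected-value calculation built on the 0.332 point
# estimate inherits that entire uncertainty.
```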
“Most suffering in the world happens in farms.”
You state this like it’s a fact but it’s heavily dependent on how you compare animal and human suffering. I don’t think this is a given. Formal attempts to compare animal and human suffering like Rethink Priorities’ Animal Welfare Estimates have enormous error bars.
Worth being cautious in a world where ~10% of the population lives on <$2 a day.
It kills ~350,000 people a year. The fatality rate isn’t as important as the total deaths.
“Only prolongs existence”
Preventing malaria stops people from suffering from the sickness, prevents grief from the death of that person (often a child), and boosts economies by decreasing sick days and reducing the burden on health systems.
The “terrible trifecta” of trouble getting started, keeping focused, and finishing up projects seems universally relatable. I don’t know many people who would say they don’t have trouble with each of these things. Drawing the line between normal and pathological human experience is very difficult, which is why the DSM-5 criteria are quite specific (and not perfect).
It might be useful to also interview people without ADHD, to differentiate pathological ADHD symptoms from normal, universal human experiences.
The risks of overdiagnosis include:
- People can develop unhealthy cognitive patterns around seeing themselves as having a “disease” when they’re actually just struggling with the standard human condition
- They might receive harmful interventions that they don’t need
- It adds unnecessary burden to health systems
The step that’s missing for me is the one where the paperclip maximiser gets the opportunity to kill everyone.
Your talk of “plans” and the dangers of executing them seems to assume that the AI has all the power it needs to execute the plans. I don’t think the AI crowd has done enough to demonstrate how this could happen.
If you drop a naked human in amongst some wolves I don’t think the human will do very well, despite its different goals and enormous intellectual advantage. Similarly, I don’t see how a fledgling sentient AGI on OpenAI’s servers could take over enough infrastructure to pose a serious threat, and I’ve not seen a convincing theory for how this would happen. Mail-order nanobots seem unrealistic (too hard to simulate the quantum effects in protein chemistry); the AI talking itself out of its box is another suggestion that seems far-fetched (the main evidence seems to be some chat games that Yudkowsky played a few times); and a gradual takeover via its voluntary uptake into more and more of our lives seems slow enough to stop.
I’m a doctor and I think there’s a lot of underappreciated value in medicine including:
- Clout: Society grants an inappropriate amount of respect to doctors, regardless of whether they’re skilled or not, junior or senior. If you have a medical degree people respect you, listen to you, and take you more seriously.
- Hidden societal knowledge: Not many people get to see as broad a cross-section of society as you see studying medicine. You meet people at their very best and worst, you meet incredibly knowledgeable people and people who never learnt to read, people who have lived incredible lives and people who have been through trauma you couldn’t imagine. You gain an understanding of how broad the spectrum of human experience is. It’s humbling and grounding.
- Social skills: Medicine is a crash course in how not to be cripplingly socially awkward (not everyone passes with flying colours). You become better at relating to people, making them feel comfortable, talking about difficult topics, and navigating conflict. These are all highly transferable skills.
- Latent medical knowledge: There’s a real freedom in being comfortable knowing when and when not to go to the hospital. Some people go to the Emergency Department every time they have a stomach ache, just in case. Learning medicine gives you a general idea of which problems are actually worth worrying about.
- Job security: You can be pretty sure you’ll always have a job no matter what (until GPT-6 arrives, but that applies to anything).
- Opens doors: Studying med doesn’t mean you need to be a doctor. You can use insider knowledge of the medical field in med tech (not many doctors can code, so it’s a useful combo), in medical research (make some malaria vaccines) or in global health.
I don’t feel like my work as a doctor is directly very impactful (I mostly do hospital paperwork). But I gave 50% of my income in my first year and I’m giving 10% of my income since. In this way you can have a lot of positive impact.
I feel the weakest part of this argument, and the weakest part of the AI Safety space generally, is the part where AI kills everyone (part 2, in this case).
You argue that most paths to some ambitious goal like whole-brain emulation end terribly for humans, because how else could the AI do whole-brain emulation without subjugating, eliminating or atomising everyone?
I don’t think that follows. This seems like what the average hunter-gatherer would have thought when made to imagine our modern commercial airlines or microprocessor industries: how could you achieve something requiring so much research, so many resources and so much coordination without enslaving huge swathes of society and killing anyone who gets in the way? And wouldn’t the knowledge to do these things cause terrible new dangers?
Luckily the hunter-gatherer is wrong: the path here has led up a slope of gradually increasing quality of life (some disagree).
“estimate… will not change much in response to new information” seems like the definition of certainty.
It seems very optimistic to think that by doing enough calculations and data analysis we can overcome the butterfly effect. Even your example of the correlation between population and economic growth is difficult to predict (e.g. concentrating wealth by reducing family size might have positive effects on economic growth).
This doesn’t seem very useful. All well and good to declare that lots of animals might have “conscious experience”, but without a way to define “conscious experience” or having any way to compare the value of the “conscious experience” of different animals, where does it get us?
I worry that this is just abstract philosophical noise that distracts from productive efforts like developing alternative proteins, exposing and lobbying against the cruelty of factory farming, and eliminating the poverty and desperation that underlies a lot of the global indifference to animal suffering.