Strong advocate of just having a normal job and giving to effective charities.
Doctor in Australia giving 10% forever
Henry Howard 🔸
That's a fair point. At either extreme of outcomes: "ASI kills us all" or "ASI quickly uplifts everyone out of poverty", almost all decisions/actions we make today are pretty meaningless.
But if the next few decades fall somewhere between those two extremes, which I think they probably will, the impact of improving people's lives remains substantial.
(NOTE: Coming at this from a place of: a. ignorance of what the AI Safety community actually does and b. not wanting to take the ego hit of admitting that I have been wrong about my long-held skepticism of AI Safety)
I think it was and is fair to be skeptical of the shift to AI Safety in EA on the basis that it's not that tractable, and that there's no clear evidence that the AI Safety movement has had a positive effect on the trajectory of AI.
"But it brought the ideas into the mainstream"
I think the AI Safety community will be tempted to think they've normalised in the zeitgeist ideas about superintelligent AIs and the philosophical questions and risks that arise from them, but 2001: A Space Odyssey came out in 1968, Terminator in 1984, The Matrix in 1999, etc. The ideas of superintelligent AIs and the existential risks they pose are diffused through modern culture, and it's possible that the Pope and the UN would have made the same statements about them, given the recent progress of LLMs, regardless of the AI Safety movement.
Are there many ideas in If Anyone Builds It, Everyone Dies that weren't broadly covered in Terminator/The Matrix/2001: A Space Odyssey/Dune etc.?
"But the work they've done has set us on the right path"
I haven't seen strong evidence for the direct work of the AI Safety movement reducing existential risks from AI:
Amanda Askell's involvement with shaping the character of Claude sounds good. Has it made much difference or is it just putting a nice and brittle mask on the beast?
AI Safety organisations like MIRI and Redwood Research have been operating for 25 and 5 years respectively. As an outsider I couldn't point to any particular breakthrough they've made in AI alignment. Redwood seems to do some kinda interesting work on measuring rogue behaviour and creating checks. I dunno. Seems like any organisation trying to make a reliable AI product would be heavily incentivised to do this stuff regardless.
In Australia, Good Ancestors has probably contributed in some way to the government's decision to potentially open an AI Safety Institute here. The statements the government puts out about it seem to mostly emphasise deepfake porn and the threat to people's jobs rather than existential risks, which makes me think that this decision might have just happened anyway regardless of the AI Safety movement.
Interpretability research seems far from being able to understand more than a few components at a time. And the companies making AI would likely have been incentivised to do this work regardless of the AI Safety movement, because customers don't want a black box.
From the outside it seems there's a good argument that the AI situation would have evolved pretty similarly regardless of EA/AI Safety input.
From that position, itâs easy to believe that if EA had just stuck to Earning To Give and malaria nets and decaging chickens then the impact would have been greater, both directly and because the movement might not have lost as much momentum when AI Safety alienated people.
I agree that the depth of the evidence conversations doesn't lend itself to amateur discussion on the forum, and I also feel like there's not much I have to add to the GHD discussions here because of that.
Don't think it's fair to say it's not prioritised among the orgs. My understanding is that Coefficient Giving still gives huge amounts to GiveWell charities and grants.
"direct altruistic focus strategically so as to be of positive utility"
Vague and evasive. Say what you mean. If you want to keep poor people poor until some new technology comes out, you should say that. If you don't think further development will ever be justified, you should say that (so that your contention can be discarded as absurd and impractical).
"From the sumatriptan RCT: 3% were pain-free at 10 minutes after placebo."
This is an irrational comparison. You're comparing your best-case-scenario anecdote to the results of an RCT.
It's possible that one of those 3% of people would have an anecdote for sumatriptan as convincing as yours: causing rapid resolution of their headache. That anecdote would not be representative.
I'm not saying you're wrong about psychedelics and cluster headaches. I desperately hope you're right and there is an easy fix. But anecdote leads people astray constantly, and we have to treat it with a high degree of suspicion.
"The effect size is incredible and the percentage of people for whom it's effective is very large". What's the source for this?
Impressive anecdotes, but we see a lot of those in medicine. Trial or it didn't happen.
Because development has been the human project for the last 10,000 years, and if we accept that it has been and continues to be a mistake then the conclusion is… what? Anarcho-primitivism/regressing to pre-industrial hunter-gatherer life/Return to Monke? That doesn't seem very practical.
Fair, I really mean pessimism rather than nihilism. On what basis can you reject philosophical pessimism (a self-consistent and valid belief that is seemingly impossible to prove/disprove) other than that it is just not pragmatic or constructive at all?
None of that suggested work seems very clarifying
The welfare ranges are extremely broad for the animals they do cover, and that's with questionable assumptions. I don't see how extending these to microbes would clarify anything.
Doing "more research" on the day-to-day experience of nematodes and how they respond to noxious stimuli, or calculating their neural energy consumption as a proxy for their ability to suffer, also doesn't seem clarifying. Imagine you knew all this information about nematodes. Still the fundamental question would remain: how does their "suffering" or "joy" compare to ours, and how morally important is it? A lot of animal ethics is driven by our ability to relate to animals ("I can relate somewhat to a chicken and I wouldn't want to be a chicken in a cage") but this falls apart by the time we get to nematodes, so you have to rely solely on your numbers, which will be extremely uncertain.
I remain very puzzled how you ever see us getting low enough error bars on the joy/suffering of microscopic worms that we could make decisions based on it.
How would you get the "Further human economic development" "necessary to build the knowledge and resources" to build a better world without supporting the development of developing countries?
Are you talking about a top-heavy approach where we keep poor countries poor until fake/cultured meat is cheap enough to supplant farmed animals?
I guess. Can you formulate an argument against nihilism that's any more substantial than that?
The theory that human development has been evil is nihilistic and could well be true, much like the nihilistic theory that the existence of biological life itself is net evil. On what basis do you reject this other than: "we can't do anything with that".
It will probably lead to increased suffering of animals (at least for a time) and this is necessary for the greater good of technological development. We're forced to consider the technological development a greater good because the alternative is to accept that the last 10,000 years of development was a mistake, which is not a viable belief.
This was named the Meat-Eater Problem in this article in the Journal of Controversial Ideas by @MichaelPlant (as comments point out, there are many earlier examples).
I think we need to be extremely suspicious of the conclusion that development is bad because of animal suffering. Development has given us everything that makes life better (as most would see it) than in pre-industrial times: antibiotics, vaccines, surgery, food security, shelter, cheap and plentiful access to knowledge and entertainment.
I don't see how you can accept the Meat-Eater Problem without also concluding that all human development in the last 10,000 years has been a mistake, in light of the horrible toll we've demanded of the workhorses and mulesed sheep and caged chickens that we tortured along the way. The Ted Kaczynskiist view that the development of society has been overall bad is internally consistent and valid but also crazy and just not compatible with any sort of continued functioning of society.
To avoid this absurd conclusion that would lead us all to nihilism or posting explosive letters, I think we have to accept that development so far has been worth the costs, and that further development, for similar benefits, will be worth the additional costs.
Can you give some examples of what research you could do to improve our understanding about either 1. whether soil microbes are sentient, or 2. whether their average life experience is net positive or negative?
These both seem completely unanswerable. With a billion dollars and no other interests I wouldn't know where to begin answering these questions.
This report from 2006 has similarly high numbers of surveyed people saying that psilocybin or LSD aborted their headaches: https://pubmed.ncbi.nlm.nih.gov/16801660/
That's 19 years for someone to do a controlled trial of cessation of cluster headaches using psilocybin or LSD vs placebo or triptan control. Wouldn't have to be very big numbers either, if the anecdotes are to be believed. I guess your theory is that there have been too many funding and legal blocks to get this done in that 19 years. Seems hard to believe. Terrible if true.
If it is true, I would recommend you focus on this as your core advocacy point: we need a placebo-controlled cluster cessation trial of psychedelics (rather than just prophylaxis). Saying "The Best Treatment for the Most Painful Medical Condition Is Illegal" is an unproven statement and makes you seem unserious.
Inspiring
I saw so many people who wanted a "job in EA". They wanted to directly do the good. Have they really thought through the bitter truth? Why do you believe you are uniquely good at an EA job? Why ignore the simple premise of earning to give?
I think there's a large number of EAs who earn to give and spend their time focusing on their career rather than spending time reading another 5,000-word forum article on shrimp or going to EA meetups. This is probably the right move if the goal is to earn as much as possible.
People who want "EA jobs" are more likely to be involved in the forum and in community events.
Then it should be quite easy to show this benefit in clinical trials, and it's suspicious that it hasn't happened.
Yes, but my point is that whether the AI Safety community has moved the dial on interpretability or government interest is unclear and worth being skeptical of.