wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism
I wasn’t familiar with these other calculations you mention. I thought you were just relying on the RP studies which seemed flimsy. This extra context makes the case much stronger.
Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
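A quick Monte Carlo sketch illustrates this (the three lognormal spreads below are illustrative assumptions, not numbers from any of the analyses discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Three independent lognormal inputs with illustrative spreads
factors = [rng.lognormal(mean=0.0, sigma=s, size=n) for s in (0.5, 0.7, 1.0)]
product = factors[0] * factors[1] * factors[2]

p5_true = np.percentile(product, 5)
p5_naive = np.prod([np.percentile(f, 5) for f in factors])

# The naive product of the factors' 5th percentiles lands far out in the
# product's tail -- well below the product's actual 5th percentile.
print(p5_true, p5_naive, (product < p5_naive).mean())
```

With these spreads, the naive product sits near the product's ~0.3rd percentile, and the true 5th percentile of the product corresponds to roughly the 16th percentile of each input.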
I don’t think that’s true either.
If you’re multiplying normally distributed variables, the general rule is that you add the percentage variances in quadrature.
Which I don’t think converges to a specific percentile like 20+. As more and more uncertainties cancel out, the relative contribution of any given uncertainty goes to zero.
IDK. I did explicitly say that my calculation wasn’t correct. And with the information on hand I can’t see how I could’ve done better. Maybe I should’ve fudged it down by one OOM.
FWIW, I started very pro-neuron counts (I defended them here and here), and then others at RP, collaborators and further investigation myself moved me away from the view.
Oh, interesting. That moves my needle.
Thanks for responding to my hot takes with patience and good humour!
Your defenses and caveats all sound very reasonable.
the relevant vertebrates are probably within an OOM of humans
So given this, you’d agree with the conclusion of the original piece? At least if we take the “number of chickens affected per dollar” input as correct?
I analyzed OP’s grants data
FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.
I also made these interactive plots which summarise all EA funding:
As I see it, we basically have a choice between:
simple methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (cortical neuron count)
complex methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (other stuff)
I much prefer the simple methodology where we can clearly see what assumptions we’re making and how that propagates out.
by that logic, two chickens have the same moral weight as one chicken because they have the same functions and capacities, no?
oh true lol
ok, animal charities still come out an order of magnitude ahead of human charities given the cage-free campaigns analysis and neuron counts
but the broader point is that the RP analyses seem far from conclusive and it would be silly to use them unilaterally for making huge funding allocation decisions, which I think still stands
RP’s moral weights and analysis of cage-free campaigns suggest that the average cost-effectiveness of cage-free campaigns is on the order of 1000x that of GiveWell’s top charities. Even if the campaigns’ marginal cost-effectiveness is 10x worse than the average, that would be 100x.
This seems to be the key claim of the piece, so why isn’t the “1000x” calculation actually spelled out?
The “cage-free campaigns analysis” estimates
how many chickens will be affected by corporate cage-free and broiler welfare commitments won by all charities, in all countries, during all the years between 2005 and the end of 2018
This analysis gives chicken years affected per dollar as 9.6-120 (95%CI), with 41 as the median estimate.
The moral weights analysis estimates “welfare ranges”, ie, the difference in moral value between the best possible and worst possible experience for a given species. This doesn’t actually tell us anything about the disutility of caging chickens. For that you would need to make up some additional numbers:
Welfare ranges allow us to convert species-relative welfare assessments, understood as percentage changes in the portions of animals’ welfare ranges, into a common unit. To illustrate, let’s make the following assumptions:
Chickens’ welfare range is 10% of humans’ welfare range.
Over the course of a year, the average chicken is about half as badly off as they could be in conventional cages (they’re at the ~50% mark in the negative portion of their welfare range).
Over the course of a year, the average chicken is about a quarter as badly off as they could be in a cage-free system (they’re at the ~25% mark in the negative portion of their welfare range).
Anyway, the 95%CI for chicken welfare ranges (as a fraction of human ranges) is 0.002-0.869, with 0.332 as the median estimate.
So if we make the additional assumptions that:
All future animal welfare interventions will be as effective as past efforts (which seems implausible given diminishing marginal returns)
Cages cause chickens to lose half of their average welfare (a totally made up number)
Then we can multiply these out to get:
The “DALYs / $ through GiveWell charities” comes from the fact that it costs ~$5000 to save the life of a child. Assuming “save a life” means adding ~50 years to the lifespan, that means $100 / DALY, or 0.01 DALYs / $.
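Spelling out the multiplication with the median estimates above (the 0.5 caging-disutility factor is the made-up assumption flagged earlier, not a measured quantity):

```python
# Median inputs from the analyses quoted above; disutility factor is assumed.
chicken_years_per_dollar = 41     # cage-free campaigns, median estimate
welfare_range = 0.332             # RP median chicken welfare range vs humans
caging_disutility = 0.5           # assumption: cages cost chickens half their range
human_dalys_per_dollar = 0.01     # ~$5000 per life saved, ~50 years added

chicken_daly_equiv_per_dollar = (
    chicken_years_per_dollar * welfare_range * caging_disutility
)
multiplier = chicken_daly_equiv_per_dollar / human_dalys_per_dollar
print(multiplier)  # on the order of 700x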
A few things to note here:
There is huge uncertainty here. The 95% CI in the table indicates that chicken interventions could be anywhere from 10,000x to 0.1x as effective as human charities. (Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I’m not sure there’s a better approach without more information about the input distributions.)
To get these estimates we had to make some implausible assumptions and also totally make up some numbers.
The old cortical neuron count proxy for moral weight says that one chicken life year is worth 0.003, which is 1/100th of the RP welfare range estimate of 0.33. This number would mean chicken interventions are only 0.7x as effective as human interventions, rather than 700x as effective. [edit: oops, maths wrong here. see Michael’s comment below.]
But didn’t RP prove that cortical neuron counts are fake?
Hardly. They gave a bunch of reasons why we might be skeptical of neuron counts (summarised here). But I think the reasons in favour of using cortical neuron count as a proxy for moral weight are stronger than the objections. And that still doesn’t give us any reason to think RP has a better methodology for calculating moral weights. It just tells us not to take cortical counts too literally.
Points in favour of cortical neuron counts as a proxy for moral weight:
Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.
There’s a common sense story of: more neurons → more compute power → more consciousness.
It’s a simple and practical approach. Obtaining the moral weight of an arbitrary animal only requires counting neurons.
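As a sketch of that practicality (the human neuron count is a rough, commonly cited figure; the chicken count is illustrative, chosen to match the ~0.003 ratio quoted later):

```python
HUMAN_CORTICAL_NEURONS = 16_000_000_000  # ~1.6e10, rough commonly cited estimate

def moral_weight(cortical_neurons: int) -> float:
    """Moral weight relative to a human under the neuron-count proxy."""
    return cortical_neurons / HUMAN_CORTICAL_NEURONS

# A chicken at roughly 0.003 of human cortical neurons (illustrative count):
print(moral_weight(48_000_000))  # 0.003, i.e. ~300 chickens per human
```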
Compare with the RP moral weights:
If we interpret the welfare ranges as moral weights, then 3 chicken life years are worth one human life year. This is not a trade I would make.
If we don’t interpret welfare ranges as moral weights, then the RP numbers tell us literally nothing.
The methodology is complex, difficult to understand, expensive, and requires reams of zoological observation to be applied to new animals.
And let’s not forget second order effects. Raising people out of poverty can increase global innovation and specialisation and accelerate economic development which could have benefits centuries from now. It’s not obvious that helping chickens has any real second order effects.
It’s not obvious to me that RP’s research actually tells us anything useful about the effectiveness of animal charities compared to human charities.
There are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence.
Cortical neuron counts still look like a pretty good way to compare welfare across species. Under cortical neuron counts, human charities come out on top.
I am much more interested in a discussion of what Nonlinear should or shouldn’t do than for Catherine Richardson from Putney to worry if she is spending too much money on rent
Beaut. Thanks for the detailed feedback!
I think these suggestions make sense to implement immediately:
add boilerplate disclaimer about accuracy / fabrication
links to author pages
note on reading time
group by tags
“Lesswrong” → “LessWrong”
The summaries are in fact generated within a Google sheet, so it does make sense to add a link to that
These things will require a bit of experimentation but are good suggestions:
Agree on the tone being boring. I can think of a couple of fixes:
Prompt GPT to be more succinct to get rid of low information nonsense
Prompt GPT to do bulletpoints rather than paragraphs
Generate little poems to introduce sections
Think about cross pollinating with Type III Audio
should be fixed for the next issue
I’ve identified source of problem and fixed, thanks!
re: the biosecurity map
did you realise that the AIS map is just pulling all the coordinates, descriptions, etc from a google sheet
if you’ve already got a list of orgs and stuff it’s not hard to turn it into a map like the AIS one by copying the code, drawing a new background, and swapping out the URL of the spreadsheet
oh this is a cool and useful resource
ty for the mention
could be could be
This is what I meant, yeah.
There’s also an issue of “low probability” meaning fundamentally different things in the case of AI doom vs supervolcanoes.
P(supervolcano doom) > 0 is a frequentist statement. “We know from past observations that supervolcano doom happens with some (low) frequency.” This is a fact about the territory.
P(AI doom) > 0 is a Bayesian statement. “Given our current state of knowledge, it’s possible we live in a world where AI doom happens.” This is a fact about our map. Maybe some proportion of technological civilisations do in fact get exterminated by AI. But maybe we’re just confused and there’s no way this could ever actually happen.
I have a masters degree in machine learning and I’ve been thinking a lot about this for like 6 years, and here’s how it looks to me:
AI is playing out in a totally different way to the doomy scenarios Bostrom and Yudkowsky warned about
AI doomers tend to hang out together and reinforce each other’s extreme views
I think rationalists and EAs can easily have their whole lives nerd-sniped by plausible but ultimately specious ideas
I don’t expect any radical discontinuities in the near-term future. The world will broadly continue as normal, only faster.
Some problems will get worse as they get faster. Some good things will get better as they get faster. Some things will get weirder in a way where it’s not clear if they’re better or worse.
Some bad stuff will probably happen. Bad stuff has always happened. So it goes.
It’s plausible humans will go extinct from AI. It’s also plausible humans will go extinct from supervolcanoes. So it goes.
I’m paralysed by the thought that I really can’t do anything about it.
IMO, a lot of people in the AI safety world are making a lot of preventable mistakes, and there’s a lot of value in making the scene more legible. If you’re a content writer, then honestly trying to understand what’s going on and communicating your evolving understanding is actually pretty valuable. Just write more posts like this.
What’s the theory of change?