might be worth defining RFP = request for proposal
Hamish McDoodles
My thinking was: because they were doing influential research and bringing in funding? FHI’s work seems significantly better than most academic philosophy, even by prestigious university standards.
But on reflection, yes, obviously Oxford University will bring more prestige to anything it touches.
Why are people pressing the “disagree” button? Do they disagree with the idea that FHI brought prestige? Do they disagree with the framing? Is it because I have a silly username?
Clearly there’s some politics going on here, but I have no idea who the factions are or why.
Someone help me out?
Why was relationship management even necessary? Wasn’t FHI bringing prestige and funding to the university? Aren’t the incentives pretty well aligned?
I’m also confused by this. Did Oxford think it was a reputation risk? Were the other philosophers jealous of the attention and funding FHI got? Was a bureaucratic parasitic egregore putting up roadblocks to siphon off money to itself? Garden variety incompetence?
oh hmm.
Looks like https://www.nonlinear.org/network.html doesn’t throw that error. Will report this back to them.
Interactive AI Governance Map
oh hey
cool to see people are finding this useful
wanted the post to focus specifically upon how difficult it seems to avoid the conclusion of prioritizing animal welfare in neartermism
I wasn’t familiar with these other calculations you mention. I thought you were just relying on the RP studies which seemed flimsy. This extra context makes the case much stronger.
Sadly, I don’t think that approach is correct. The 5th percentile of a product of random variables is not the product of the 5th percentiles—in fact, in general, it’s going to be a product of much higher percentiles (20+).
I don’t think that’s true either.
If you’re multiplying normally distributed variables, the general rule is that you add the percentage variances in quadrature.
Which I don’t think converges to a specific percentile like 20+. As more and more uncertainties cancel out the relative contribution of any given uncertainty goes to zero.
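To illustrate with made-up numbers (three hypothetical inputs, each with a 5% relative standard deviation — these are stand-ins, not the actual estimates):

```python
import math, random

random.seed(0)
N = 200_000
rel_sds = [0.05, 0.05, 0.05]  # three hypothetical inputs, 5% relative std each

# Monte Carlo: multiply together draws from N(1, s) for each input.
prods = []
for _ in range(N):
    x = 1.0
    for s in rel_sds:
        x *= random.gauss(1.0, s)
    prods.append(x)

mean = sum(prods) / N
sd = math.sqrt(sum((p - mean) ** 2 for p in prods) / N)

# Quadrature prediction: sqrt(3 * 0.05^2) ~= 8.7%, not 5% * 3 = 15%.
predicted = math.sqrt(sum(s * s for s in rel_sds))
print(round(sd / mean, 3), round(predicted, 3))
```

The point being: as you multiply in more comparable uncertainties, each one’s share of the combined error shrinks like 1/√n rather than piling up linearly.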
IDK. I did explicitly say that my calculation wasn’t correct. And with the information on hand I can’t see how I could’ve done better. Maybe I should’ve fudged it down by one OOM.
Thanks for responding to my hot takes with patience and good humour!
Your defenses and caveats all sound very reasonable.
the relevant vertebrates are probably within an OOM of humans
So given this, you’d agree with the conclusion of the original piece? At least if we take the “number of chickens affected per dollar” input as correct?
FYI, I made a spreadsheet a while ago which automatically pulls the latest OP grants data and constructs summaries and pivot tables to make this type of analysis easier.
I also made these interactive plots which summarise all EA funding:
As I see it, we basically have a choice between:
simple methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (cortical neuron count)
complex methodology to make vaguely plausible guesses about the unknowable phenomenology of chickens (other stuff)
I much prefer the simple methodology where we can clearly see what assumptions we’re making and how that propagates out.
by that logic, two chickens have the same moral weight as one chicken because they have the same functions and capacities, no?
oh true lol
ok, animal charities still come out an order of magnitude ahead of human charities given the cage-free campaigns analysis and neuron counts
but the broader point is that the RP analyses seem far from conclusive and it would be silly to use them unilaterally for making huge funding allocation decisions, which I think still stands
RP’s moral weights and analysis of cage-free campaigns suggest that the average cost-effectiveness of cage-free campaigns is on the order of 1000x that of GiveWell’s top charities.[5] Even if the campaigns’ marginal cost-effectiveness is 10x worse than the average, that would be 100x.
This seems to be the key claim of the piece, so why isn’t the “1000x” calculation actually spelled out?
The “cage-free campaigns analysis” estimates
how many chickens will be affected by corporate cage-free and broiler welfare commitments won by all charities, in all countries, during all the years between 2005 and the end of 2018
This analysis gives chicken years affected per dollar as 9.6-120 (95%CI), with 41 as the median estimate.
The moral weights analysis estimates “welfare ranges”, i.e., the difference in moral value between the best possible and worst possible experience for a given species. This doesn’t actually tell us anything about the disutility of caging chickens. For that you would need to make up some additional numbers:
Welfare ranges allow us to convert species-relative welfare assessments, understood as percentage changes in the portions of animals’ welfare ranges, into a common unit. To illustrate, let’s make the following assumptions:
Chickens’ welfare range is 10% of humans’ welfare range.
Over the course of a year, the average chicken is about half as badly off as they could be in conventional cages (they’re at the ~50% mark in the negative portion of their welfare range).
Over the course of a year, the average chicken is about a quarter as badly off as they could be in a cage-free system (they’re at the ~25% mark in the negative portion of their welfare range).
Anyway, the 95%CI for chicken welfare ranges (as a fraction of human ranges) is 0.002-0.869, with 0.332 as the median estimate.
So if we make the additional assumptions that:
All future animal welfare interventions will be as effective as past efforts (which seems implausible given diminishing marginal returns)
Cages cause chickens to lose half of their average welfare (a totally made up number)
Then we can multiply these out to get:
The “DALYs / $ through GiveWell charities” comes from the fact that it costs ~$5000 to save the life of a child. Assuming “save a life” means adding ~50 years to the lifespan, that means $100 / DALY, or 0.01 DALYs / $.
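Making the multiplication explicit with the median estimates (the 0.5 cage-disutility figure is the made-up assumption flagged above, and 0.01 DALYs / $ is the GiveWell benchmark):

```python
# Median-estimate version of the implied calculation.
chicken_years_per_dollar = 41    # cage-free campaigns analysis, median
welfare_range = 0.332            # chicken welfare range vs humans, median
cage_disutility = 0.5            # assumed welfare-range loss from caging (made up)

chicken_dalys_per_dollar = chicken_years_per_dollar * welfare_range * cage_disutility
givewell_dalys_per_dollar = 0.01  # ~$5000 per life saved, ~50 DALYs per life

print(chicken_dalys_per_dollar / givewell_dalys_per_dollar)  # ~680x
```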
A few things to note here:
There is huge uncertainty here. The 95% CI in the table indicates that chicken interventions could be anywhere from 10,000x to 0.1x as effective as human charities. (Although I got the 5th and 95th percentiles of the output by simply multiplying the 5th and 95th percentiles of the inputs. This is not correct, but I’m not sure there’s a better approach without more information about the input distributions.)
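For illustration, here’s a quick Monte Carlo sketch (assuming hypothetical lognormal inputs, not the actual RP distributions) of why multiplying the inputs’ 5th percentiles overstates the width of the output CI:

```python
import random

random.seed(0)
N = 100_000
# Three hypothetical inputs, each lognormal with sigma = 1 in log space.
samples = [[random.lognormvariate(0, 1) for _ in range(N)] for _ in range(3)]

def pct(xs, p):
    """p-th percentile by sorting (good enough for a sketch)."""
    return sorted(xs)[int(p / 100 * len(xs))]

product = [a * b * c for a, b, c in zip(*samples)]

naive = pct(samples[0], 5) * pct(samples[1], 5) * pct(samples[2], 5)
true_p5 = pct(product, 5)

# The product of the 5th percentiles lands well below the product's
# actual 5th percentile, i.e. the naive CI is wider than the true one.
print(naive < true_p5)  # prints True
```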
To get these estimates we had to make some implausible assumptions and also totally make up some numbers.
The old cortical neuron count proxy for moral weight says that one chicken life year is worth 0.003, which is 1/100th of the RP welfare range estimate of 0.33. This number would mean chicken interventions are only 0.7x as effective as human interventions, rather than 700x as effective. [edit: oops, maths wrong here. see Michael’s comment below.]
But didn’t RP prove that cortical neuron counts are fake?
Hardly. They gave a bunch of reasons why we might be skeptical of neuron counts (summarised here). But I think the reasons in favour of using cortical neuron count as a proxy for moral weight are stronger than the objections. And that still doesn’t give us any reason to think RP has a better methodology for calculating moral weights. It just tells us not to take cortical counts too literally.
Points in favour of cortical neuron counts as a proxy for moral weight:
Neuron counts correlate with our intuitions of moral weights. Cortical counts would say that ~300 chicken life years are morally equivalent to one human life year, which sounds about right.
There’s a common sense story of: more neurons → more compute power → more consciousness.
It’s a simple and practical approach. Obtaining the moral weight of an arbitrary animal only requires counting neurons.
Compare with the RP moral weights:
If we interpret the welfare ranges as moral weights, then 3 chicken life years are worth one human life year. This is not a trade I would make.
If we don’t interpret welfare ranges as moral weights, then the RP numbers tell us literally nothing.
The methodology is complex, difficult to understand, expensive, and requires reams of zoological observation to be applied to new animals.
And let’s not forget second order effects. Raising people out of poverty can increase global innovation and specialisation and accelerate economic development which could have benefits centuries from now. It’s not obvious that helping chickens has any real second order effects.
In conclusion:
It’s not obvious to me that RP’s research actually tells us anything useful about the effectiveness of animal charities compared to human charities.
There are tons of assumptions and simplifications that go into these RP numbers, so any conclusions we can draw must be low confidence.
Cortical neuron counts still look like a pretty good way to compare welfare across species. Under cortical neuron count, human charities come out on top.
I am much more interested in a discussion of what Nonlinear should or shouldn’t do than for Catherine Richardson from Putney to worry if she is spending too much money on rent
lol
Beaut. Thanks for the detailed feedback!
I think these suggestions make sense to implement immediately:
add boilerplate disclaimer about accuracy / fabrication
links to author pages
note on reading time
group by tags
“Lesswrong” → “LessWrong”
The summaries are in fact generated within a Google sheet, so it does make sense to add a link to that
These things will require a bit of experimentation but are good suggestions:
Agree on the tone being boring. I can think of a couple of fixes:
Prompt GPT to be more succinct to get rid of low information nonsense
Prompt GPT to do bulletpoints rather than paragraphs
Generate little poems to introduce sections
Think about cross pollinating with Type III Audio
should be fixed for the next issue
I don’t understand this sentence. The value on the table is good ideas that don’t get realised because they’re poorly communicated?