I [Nathan] think that shrimp QALYs and human QALYs have some exchange rate, we just don’t have a good handle on it yet.
I think being able to compare the welfare of shrimps and humans is far from enough. I do not know of any interventions which robustly increase welfare in expectation, because their uncertain effects on soil animals dominate. I would be curious to know your thoughts on these.
Oh, this [the point from Nathan quoted above] is nice to read, as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. via RP’s moral weights project).
I believe the results of Rethink Priorities’ (RP’s) moral weight project, and of Bob Fischer’s book about comparing welfare across species (which contains what RP stands behind now), are still a very long way from being robust. For example, the estimate in Bob’s book for the welfare range of shrimps is 8.0 % that of humans, but I would say it would be quite reasonable for someone to have a best guess of 10^-6, the ratio between the number of neurons of shrimps and humans.
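To make that ratio concrete, here is a rough, illustrative calculation using commonly cited ballpark neuron counts (roughly 10^5 for a shrimp and 8.6*10^10 for a human); these figures are approximations I am supplying here, not numbers from the book:

```python
# Rough, illustrative neuron-count ratio (ballpark figures, not from Bob's book).
shrimp_neurons = 1e5    # order-of-magnitude estimate for a shrimp
human_neurons = 8.6e10  # commonly cited estimate for a human brain
print(shrimp_neurons / human_neurons)  # ~1.2e-06, i.e. roughly 10^-6
```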
Maybe I’m being too facile here, but I genuinely think that even just taking all these numbers, making them visible in one place, taking the median of them, producing a ranking from that, and then letting people flag things they think are perverse within that ranking would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates that one could produce a thing which just gathers them all up and displays them. That would be a sort of survey, which wouldn’t be bad in itself even if the answers were universally agreed to be pretty dubious. But I think it would point to the underlying work which still needs to be done.
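As a minimal sketch of what gathering the estimates, taking medians, and ranking could look like (the source names and numbers below are purely illustrative placeholders, not real estimates):

```python
import statistics

# Hypothetical welfare-range estimates relative to humans, from several sources.
# Source names and values are placeholders for illustration only.
estimates = {
    "shrimp":  {"source_A": 0.08, "source_B": 1e-6, "source_C": 0.01},
    "chicken": {"source_A": 0.3,  "source_B": 0.01, "source_C": 0.1},
    "human":   {"source_A": 1.0,  "source_B": 1.0,  "source_C": 1.0},
}

# Median across sources for each species, then a ranking people can scrutinise.
medians = {species: statistics.median(vals.values()) for species, vals in estimates.items()}
for species, m in sorted(medians.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{species}: median welfare range = {m:g}")
```

Anyone could then scan the resulting ranking for entries they find perverse and trace them back to the individual estimates.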
I think one of the challenges here is that the people who are respected/have a leadership-y role on cause prioritisation seem to have been reluctant to weigh in here, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.
Even more speculative: Maybe part of what’s going on here is that the charity comparison numbers GiveWell produces, or comparisons of charities within a cause area in general, are one level of crazy and difficult. But the moment you get to cross-cause comparisons, these numbers become several orders of magnitude more crazy and uncertain. And maybe there’s a reluctance to use the same methodology for something so much more uncertain, because it’s a less useful tool/there’s a risk it is perceived as more solid than it is.
Overall I think more people who have insights on cause prio should be saying: if I had a billion dollars, here’s how I’d spend it, and why.
I see some value in this. However, I would be much more interested in how they would decrease the uncertainty about cause prioritisation, which is super large. I would spend at least 1 % of it, i.e. 10 M$ (= 0.01*1*10^9), on decreasing the uncertainty about comparisons of expected hedonistic welfare across species and substrates (biological or not). Relatedly, RP has a research agenda about interspecies welfare comparisons more broadly (not just under expectational total hedonistic utilitarianism).
I definitely think this should happen too, but reducing uncertainty about cause prio beyond what has already been done to date is a much much bigger and harder ask than ‘share your best guess of how you would allocate a billion dollars’.
How different is that from ranking the results from RP’s cross-cause cost-effectiveness model (CCM)? I collected estimates from this in a comment 2 years ago.