Maybe I’m being too facile here, but I genuinely think that even just taking all these numbers, making them visible in some place, and then taking the median of them, and giving a ranking according to that, and then allowing people to find things they think are perverse within that ranking, would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates that one could produce a thing which just gathers all the estimates up and displays them. That would be sort of a survey or something, but that wouldn’t make it bad in itself, even if the answers were universally agreed to be pretty dubious. And I think it would point more clearly to the underlying work that needs to be done. (A rough sketch of what I mean is below.)
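To make the shape of that concrete, here is a minimal sketch of the aggregation step. The causes, sources, and numbers are entirely made-up placeholders, not real estimates; the point is only the mechanics of taking a median per cause and ranking by it.

```python
import statistics

# Placeholder data: each cause maps to a list of illustrative cost-effectiveness
# estimates (e.g. units of good done per $1k). These values are arbitrary and
# stand in for whatever published or back-of-envelope estimates one collects.
estimates = {
    "cause_a": [3.0, 5.5, 2.0],
    "cause_b": [40.0, 0.5, 10.0],
    "cause_c": [1.2, 1.5],
}

# Take the median estimate for each cause, then rank causes from highest to lowest.
ranking = sorted(
    ((cause, statistics.median(values)) for cause, values in estimates.items()),
    key=lambda item: item[1],
    reverse=True,
)

for rank, (cause, median_estimate) in enumerate(ranking, start=1):
    print(f"{rank}. {cause}: median estimate {median_estimate}")
```

The output is exactly the kind of ranking people could then inspect for results they find perverse.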
I think one of the challenges here is that the people who are respected/have a leadership-type role on cause prioritisation seem to have been reluctant to weigh in here, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.
Even more speculative: Maybe part of what’s going on here is that the charity comparison numbers GiveWell produces, or comparisons of charities within a cause area in general, are one level of crazy and difficult. But the moment you get to cross-cause comparisons, the numbers become several orders of magnitude more crazy and uncertain. And maybe there’s a reluctance to use the same methodology for something so much more uncertain, because it’s a less useful tool/there’s a risk it is perceived as something more solid than it is.
Overall, I think more people who have insights on cause prio should be saying: if I had a billion dollars, here’s how I’d spend it, and why.
I see some value in this. However, I would be much more interested in how they would decrease the uncertainty about cause prioritisation, which is super large. I would spend at least 1 %, i.e. 10 M$ (= 0.01*1*10^9 $), on decreasing the uncertainty about comparisons of expected hedonistic welfare across species and substrates (biological or not). Relatedly, RP has a research agenda about interspecies welfare comparisons more broadly (not just under expectational total hedonistic utilitarianism).
I definitely think this should happen too, but reducing uncertainty about cause prio beyond what has already been done to date is a much much bigger and harder ask than ‘share your best guess of how you would allocate a billion dollars’.
How different is that from ranking the results from RP’s cross-cause cost-effectiveness model (CCM)? I collected estimates from this in a comment 2 years ago.