I think the moment you try to compare charities across causes, especially the ones that rest on harder-to-evaluate assumptions like global catastrophic risk and animal welfare, it very quickly becomes clear how shaky any seemingly solid numbers are, how much they rest on uncertain philosophical assumptions, and how wide the error margins are. I think at that point you’re either left with worldview diversification or some incredibly complex, as-yet-not-very-well-settled, cause prioritisation.
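To make the "wide error margins" point concrete, here is a toy Monte Carlo sketch. Everything in it (the number of factors, the distributions, the spread per factor) is hypothetical and chosen only for illustration: the idea is just that a cross-cause estimate is typically a product of several uncertain factors, and multiplying even a handful of them compounds the uncertainty quickly.

```python
# Toy Monte Carlo sketch of why cross-cause estimates carry such wide error bars:
# a cost-effectiveness number is typically a product of several uncertain factors,
# and multiplying even a handful of them compounds the uncertainty quickly.
# Every distribution below is hypothetical, chosen only for illustration.
import math
import random

random.seed(0)

def sample_estimate(n_factors: int = 3) -> float:
    # Each made-up factor is uncertain by roughly an order of magnitude
    # (log-normal with sigma = ln(10) / 2).
    sigma = math.log(10) / 2
    return math.prod(random.lognormvariate(0, sigma) for _ in range(n_factors))

samples = sorted(sample_estimate() for _ in range(10_000))
p5 = samples[int(0.05 * len(samples))]
p95 = samples[int(0.95 * len(samples))]
print(f"90% interval spans a factor of roughly {p95 / p5:.0f}")
```

Under these made-up assumptions the 90 % interval already spans a factor of several hundred, before any philosophical disagreement about moral weights even enters.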
My understanding is that all of the EA high-net-worth donor advisors, like Longview, GiveWell, Coefficient Giving, Senterra Funders (the org I work at), and many others, are able to pitch their various offerings to folks at Anthropic.
What has been missing is some recommended cause prio split and/or resources, though some orgs are starting to work on this now.
I think that any way to systematise this, where you complete a quiz and it gives you an answer, is too superficial to be useful. High-net-worth funders need to decide for themselves whether they trust specific grant makers, beyond whether those grant makers are aligned with their values on paper.
Naaaah, seems cheems. Seems worth trying. If we can’t then fair enough. But it doesn’t feel to me like we’ve tried.
Edit, for specificity. I think that shrimp QALYs and human QALYs have some exchange rate, we just don’t have a good handle on it yet. And I think that if we’d decided that difficult things weren’t worth doing we wouldn’t have done a lot of the things we’ve already done.
Also, hey Elliot, I hope you’re doing well.
Oh, this is nice to read, as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. via RP’s moral weights project).
Some rough thoughts: it’s when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan (although some have tried). I also think the uncertainty here is so large that it’s harder to defer to experts in the way that one can defer to GiveWell if one cares about helping the world’s poorest people alive today.
But I do agree that people need a way to decide, and Anthropic staff are incredibly time-poor and some of these interventions are very time-sensitive if you have short timelines, so that raises the question: if I’m recommending worldview diversification, which cause areas get attention and how do we split among them?
I am legitimately very interested in thoughtful quantitative ways of going about this (my job involves a non-zero amount of advising Anthropic folks). Right now, it seems like Rethink Priorities is the only group doing this in public (e.g. here). To be honest, I find their work has gone over my head, and while I don’t want to speak for them, my understanding is they might be doing more in this space soon.
Hi Elliot and Nathan.
I think being able to compare the welfare of shrimps and humans is far from enough. I do not know of any interventions which robustly increase welfare in expectation, because highly uncertain effects on soil animals dominate. I would be curious to know your thoughts on these.
I believe we are a very long way from robust results in Rethink Priorities’ (RP’s) moral weight project and Bob Fischer’s book on comparing welfare across species, which contains what RP stands behind now. For example, the estimate in Bob’s book for the welfare range of shrimps is 8.0 % that of humans, but I would say it would be quite reasonable for someone to have a best guess of 10^-6, the ratio between the number of neurons of shrimps and humans.
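To illustrate how much rides on that single input, here is a minimal back-of-the-envelope sketch. The only figures taken from the comment above are the two welfare-range guesses (8.0 % and 10^-6); the per-dollar intervention numbers and labels are made up purely for illustration.

```python
# Back-of-the-envelope sketch of how the assumed shrimp welfare range
# (relative to humans) swings a cross-species comparison.
# The two welfare-range guesses come from the comment above; the per-dollar
# figures below are made up purely for illustration.

welfare_range_guesses = {
    "RP / Fischer book estimate": 0.08,   # 8.0 % of the human welfare range
    "neuron-count ratio guess":   1e-6,   # rough shrimp/human neuron ratio
}

# Hypothetical intervention figures (not real estimates):
shrimp_years_improved_per_dollar = 1000      # made up
human_dalys_averted_per_dollar = 1 / 5000    # made up

for label, welfare_range in welfare_range_guesses.items():
    # Convert improved shrimp-years into "human-equivalent" welfare units.
    human_equivalents_per_dollar = shrimp_years_improved_per_dollar * welfare_range
    ratio = human_equivalents_per_dollar / human_dalys_averted_per_dollar
    print(f"{label}: shrimp intervention looks {ratio:.3g}x the human one")
```

Under these made-up numbers the bottom line moves by the same factor of roughly 10^5 as the welfare-range guess itself, which is the point: the philosophical input dominates the empirical ones.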
Maybe I’m being too facile here, but I genuinely think that even just taking all these numbers, making them visible in one place, taking the median of them, producing a ranking from that, and then allowing people to point out things they think are perverse within that ranking, would be a pretty solid start.
I think producing suspect work is often the precursor to producing good work.
And I think there are enough estimates out there that one could produce a thing which just gathers them all up and displays them. It would essentially be a survey, which wouldn’t be bad in itself, even if the answers were universally agreed to be pretty dubious. But I think it would point to the underlying work which still needs to be done.
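As a minimal sketch of what "gather the estimates, take the median, and rank" could look like, assuming the cause areas, source names, and numbers below are placeholders rather than real estimates:

```python
# Minimal sketch of the "collect every published estimate, take the median,
# and rank" idea. The cause areas, sources, and numbers below are placeholders
# only; the point is the aggregation step, not the figures.
from statistics import median

# Hypothetical cost-effectiveness estimates (say, "good done per $"),
# keyed by cause area, one entry per published source.
estimates = {
    "Global health (e.g. AMF)": {"source A": 1.0, "source B": 0.8, "source C": 1.3},
    "Shrimp welfare":           {"source A": 30.0, "source D": 0.001, "source E": 5.0},
    "AI safety policy":         {"source B": 100.0, "source D": 0.1},
}

# Take the median across sources for each cause, then rank causes by it.
medians = {cause: median(vals.values()) for cause, vals in estimates.items()}
ranking = sorted(medians.items(), key=lambda kv: kv[1], reverse=True)

for rank, (cause, med) in enumerate(ranking, start=1):
    spread = estimates[cause].values()
    print(f"{rank}. {cause}: median {med:g} "
          f"(range {min(spread):g} to {max(spread):g})")
```

Showing the spread next to the median is also what would let people spot the entries they think are perverse.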
I think one of the challenges here is that the people who are respected/have a leadership-y role on cause prioritisation seem to have been reluctant to weigh in, perhaps to the detriment of Anthropic folks trying to make a decision one way or another.
Even more speculative: maybe part of what’s going on here is that the charity comparison numbers GiveWell produces, or within-cause comparisons in general, are one level of crazy and difficult, but the moment you get to cross-cause comparisons, the numbers become several orders of magnitude more crazy and uncertain. And maybe there’s a reluctance to use the same methodology for something so much more uncertain, because it’s a less useful tool and there’s a risk it is perceived as more solid than it is.
Overall, I think more people who have insights on cause prio should be saying: if I had a billion dollars, here’s how I’d spend it, and why.
I see some value in this. However, I would be much more interested in how they would decrease the uncertainty about cause prioritisation, which is super large. I would spend at least 1 %, i.e. 10 M$ (= 0.01*10^9 $), decreasing the uncertainty about comparisons of expected hedonistic welfare across species and substrates (biological or not). Relatedly, RP has a research agenda about interspecies welfare comparisons more broadly (not just under expectational total hedonistic utilitarianism).
I definitely think this should happen too, but reducing uncertainty about cause prio beyond what has been done to date is a much, much bigger and harder ask than ‘share your best guess of how you would allocate a billion dollars’.
How different is that from ranking the results from RP’s cross-cause cost-effectiveness model (CCM)? I collected estimates from this in a comment 2 years ago.
When people write about where they donate, aren’t they implicitly giving a ranking?
Sure, but a really illegible and hard-to-search one.
100 years of progress in the science and philosophy of consciousness should settle it. Start by reading a few books on the subject a year for a few years.
I recommend starting with Consciousness Explained by Daniel Dennett. It’s one of my favourite books.
Professional philosophers don’t even agree on whether dualism, eliminativism, functionalism, identity theory, or panpsychism is true, despite decades of scholarship, so don’t expect to quickly find a consensus on finer-grained questions like how to quantify and compare shrimp consciousness (and whether it exists in the first place) to human consciousness. Even if you can form your own view to your own satisfaction within a year, it’s unlikely that you’ll convince many others that you’re right.
On the other hand, you might succeed where thousands of others haven’t, and become hailed as one of the greatest living philosophers/scientists.
Hi, this is the second or third of my comments you’ve come and snarked on. I’ll ask again: have I upset you, that you should talk to me like this?