Naaaah, seems cheems. Seems worth trying. If we can’t then fair enough. But it doesn’t feel to me like we’ve tried.
Edit, for specificity. I think that shrimp QALYs and human QALYs have some exchange rate, we just don’t have a good handle on it yet. And I think that if we’d decided that difficult things weren’t worth doing we wouldn’t have done a lot of the things we’ve already done.
Also, hey Elliot, I hope you’re doing well.
Oh, this is nice to read as I agree that we might be able to get some reasonable enough answers about Shrimp Welfare Project vs AMF (e.g. RP’s moral weights project).
Some rough thoughts: It’s when we get to comparing Shrimp Welfare Project to AI safety PACs in the US that I think the task goes from crazy hard but worth it to maybe too gargantuan (although some have tried). I also think the uncertainty here is so large that it’s harder to defer to experts in the way that one can defer to GiveWell if they care about helping the world’s poorest people alive today.
But I do agree that people need a way to decide, and Anthropic staff are incredibly time-poor and some of these interventions are very time sensitive if you have short timelines, so that just raises the question: if I’m recommending worldview diversification, which cause areas get attention and how do we split among them?
I am legitimately very interested in thoughtful quantitative ways of going about this (my job involves a non-zero amount of advising Anthropic folks). Right now, it seems like Rethink Priorities is the only group doing this in public (e.g. here). To be honest, I find their work has gone over my head, and while I don’t want to speak for them, my understanding is they might be doing more in this space soon.
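As one very rough illustration of what a quantitative approach to the "how do we split?" question could look like (an assumption-laden sketch, not something anyone in this thread endorses; the credences, cause areas, and floor below are made-up placeholders), worldview diversification could be operationalised as splitting a budget in proportion to one's credence in each worldview, with a small floor so no area is starved entirely:

```python
# A minimal sketch of credence-proportional budget splitting under
# worldview diversification. Every number here is an illustrative
# placeholder, not anyone's actual credences or recommendations.

def split_budget(credences: dict[str, float], budget: float, floor: float = 0.05) -> dict[str, float]:
    """Allocate `budget` across cause areas in proportion to credence,
    guaranteeing each area at least `floor` (as a fraction of the budget)."""
    assert floor * len(credences) <= 1.0, "floors cannot exceed the whole budget"
    total = sum(credences.values())
    remainder = 1.0 - floor * len(credences)
    # Give every area its floor, then distribute the rest by normalised credence.
    return {area: budget * (floor + remainder * credence / total)
            for area, credence in credences.items()}

# Hypothetical credences in "this worldview is the one that matters most".
allocation = split_budget(
    {"global health": 0.4, "animal welfare": 0.3, "AI safety": 0.3},
    budget=10_000,
)
print(allocation)  # {'global health': 3900.0, 'animal welfare': 3050.0, 'AI safety': 3050.0}
```

Everything hard is hidden in the inputs here, of course: where the credences come from, and how to rank options within each bucket, which is where the point about deferring to experts bites.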
Hi Elliot and Nathan.
I think being able to compare the welfare of shrimps and humans is far from enough. I do not know of any interventions which robustly increase welfare in expectation, due to dominant but highly uncertain effects on soil animals. I would be curious to know your thoughts on these.
I believe we are still a very long way from robust results in Rethink Priorities’ (RP’s) moral weight project, and in Bob Fischer’s book about comparing welfare across species, which contains what RP stands behind now. For example, the estimate in Bob’s book for the welfare range of shrimps is 8.0 % of that of humans, but I would say it would be quite reasonable for someone to have a best guess of 10^-6, the ratio between the number of neurons of shrimps and humans.
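To make the size of that disagreement concrete, here is a minimal sketch of the arithmetic, assuming ballpark neuron counts of roughly 10^5 for a shrimp and 8.6 × 10^10 for a human (both figures are order-of-magnitude assumptions for illustration, not anything taken from RP or Bob Fischer’s book):

```python
# Rough arithmetic behind the two candidate shrimp-to-human "exchange rates"
# discussed above. Neuron counts are ballpark assumptions for illustration.
shrimp_neurons = 1e5     # order-of-magnitude figure for a decapod
human_neurons = 8.6e10   # commonly cited estimate for the human brain

neuron_ratio = shrimp_neurons / human_neurons      # ~1.2e-06
book_welfare_range = 0.08                          # the 8.0 % figure quoted above

print(f"neuron-count ratio:     {neuron_ratio:.1e}")
print(f"book welfare range:     {book_welfare_range:.1e}")
print(f"spread between the two: {book_welfare_range / neuron_ratio:,.0f}x")
```

The two candidate weights differ by roughly five orders of magnitude, which is the sense in which the choice of exchange rate, rather than anything about the interventions themselves, can end up driving a comparison like Shrimp Welfare Project vs AMF.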
When people write about where they donate, aren’t they implicitly giving a ranking?
Sure, but a really illegible and hard-to-search one.
100 years of progress in the science and philosophy of consciousness should settle it. Start by reading a few books a year on the subject for a few years.
I recommend starting with Consciousness Explained by Daniel Dennett. It’s one of my favourite books.
Professional philosophers don’t even agree on whether dualism, eliminativism, functionalism, identity theory, or panpsychism is true, despite decades of scholarship, so don’t expect to quickly find a consensus on finer-grained questions like how to quantify and compare shrimp consciousness (and whether it exists in the first place) to human consciousness. Even if you can form your own view to your own satisfaction within a year, it’s unlikely that you’ll convince many others that you’re right.
On the other hand, you might succeed where thousands of others haven’t, and become hailed as one of the greatest living philosophers/scientists.