Thanks for the feedback everyone. Lots of recurring themes, so I’ll address them partly here.
The main point is this: the end market is not Effective Altruists. I don’t think it’s at all likely that adding complexity for the sake of accuracy, at least on the front end, will result in any meaningful reduction in animal suffering. The point is not to be deceitful or to bias people, but simply to maximise the reduction in animal suffering.
As someone said at the EA Global 2015 conference in Melbourne, “Sometimes the best way to be a utilitarian is to pretend to not be a utilitarian”, which I loosely take to mean that we should sometimes drop our perceived moral or analytical rigour in order to actually do more good.
Perhaps there could be two versions: one that is completely rigorous, contains elements of x-risk (as some people have suggested), and is targeted at existing EAs, and another targeted at the broader public.
On a related note, I’m yet to do the calculation, but I’m of the view that current estimates for animal welfare charities are actually underestimates, as they don’t factor in the long-run benefits of reducing the proportion of humanity that relies on subjecting animals to suffering. The earlier a society that doesn’t inflict suffering on animals is brought about, the fewer future animals will suffer. I find this tends not even to be mentioned when people compare animal welfare orgs to x-risk orgs.
But I’m very open to continuing this discussion. As I’ve said these are early days for what was an idea I wanted to get in the public space.
I think you’re conflating a couple of different dimensions: degree of complexity, and degree of rigour.
These two are linked: there are some aspects that it’s hard to be rigorous about without a certain level of complexity. But it can also be more work to make a more complex model rigorous, because you need to be careful about more moving parts.
I think for a calculator like this you should be aiming for low complexity and high rigour. Adding more questions or complicated arguments could put people off. But making elementary mistakes or using sleight of hand in conversions makes it easier to attack (and people will try to attack it) and dismiss. So keep the number of questions small—addressing existential risk definitely looks like a mistake to me—but try to make them the most appropriate ones, and keep the language precise. This recent post on depicting poverty and Josh’s comment there have some good discussion of what kind of language will avoid pushback.
Great comment, you’ve convinced me. Thanks for the link as well, it looks interesting.