This is not really an argument to either side, but a while ago I created a rough little spreadsheet where you can put in:
- How much disvalue you see in the world going badly for animals vs. humans
- How likely you think it is that the world will go badly for animals vs. humans vs. both
- How much of the work that makes AI go well for humans you expect to also help make AI go well for animals
And it calculates for you what you should focus on (AIS vs. AIxAnimals) :)
It’s very rough, very proxy, all the usual caveats apply. But I am hoping that it can help surface some intuitions about the implications of whatever stance you have after (!) the debate week. Feel free to make a copy and tweak it for your own use!
I’m not sure how to interpret the “Cost” lines. Is it supposed to be the negation of utility? And therefore “Cost of World C (Good for Humans + Good for Animals)” should be a negative number, because it has positive utility?
Yes, “cost” is the negation of utility, and the whole thing is anchored against 0 (so the world where everything goes well is the baseline, 0, and it only calculates how bad it would be for something to go wrong). There is definitely a more elaborate version of this where you differentiate between more possible worlds that go badly, neutrally, or well for humans and/or animals, and involve both negative and positive numbers. I'm not sure how much that would realistically change, cause-prioritization-wise.
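For concreteness, here is a minimal sketch of the kind of calculation described above, in Python. This is my own reconstruction under stated assumptions, not the actual spreadsheet: the variable names, the independence of the probabilities, and the way overlap is aggregated are all assumptions for illustration.

```python
# Hypothetical sketch of the spreadsheet's logic. Costs are negated
# utilities, anchored at 0 = "everything goes well for everyone".

# Assumed inputs (placeholder numbers):
cost_bad_for_humans = 100.0   # disvalue if AI goes badly for humans
cost_bad_for_animals = 50.0   # disvalue if AI goes badly for animals

p_bad_humans = 0.2            # probability AI goes badly for humans
p_bad_animals = 0.5           # probability AI goes badly for animals

# Fraction of "make AI go well for humans" work assumed to also
# help make AI go well for animals:
overlap = 0.3

# Expected cost averted by each focus area, assuming (simplistically)
# that effort in an area reduces the corresponding bad outcome:
value_of_ais = (p_bad_humans * cost_bad_for_humans
                + overlap * p_bad_animals * cost_bad_for_animals)
value_of_aixanimals = p_bad_animals * cost_bad_for_animals

focus = "AIS" if value_of_ais > value_of_aixanimals else "AIxAnimals"
print(focus, value_of_ais, value_of_aixanimals)
```

With these placeholder inputs, AIS edges out AIxAnimals (27.5 vs. 25 in expected cost averted), but small changes to the probabilities or the overlap flip the answer, which is exactly the kind of sensitivity the spreadsheet is meant to make visible.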