I don’t see a way for it to go on forever.
We should expect the efficiency of farming to improve until no suffering is involved.
(See the cellular agriculture/cultured meat projects.)
We should expect humans to change for the better.
It would be deliriously conservative to expect as many as 10 thousand years to pass before humans begin to improve their own minds, to live longer while retaining mental clarity and flexibility, to be aware of more, to be more as they wish to be, to reckon deeply with their essential, authentic values, to learn to live in accordance with them.
Even this deliriously conservative estimate of 10 thousand years would place the vast majority of the future after this transition.
And after that transition, I would expect to see a very large portion of humanity realize that they have little tolerance for the suffering of other beings.
Even if the proportion of humanity who weather uplifting and somehow remain indifferent to suffering is high, say 30%, I'd expect the anti-suffering majority to simply buy all of their farms from them; few could relish suffering so much that we could not offer more than the farms are worth in order to halt it. Very little suffering would continue.
If you think that this will not be the case — that the deep values of the majority of humanity genuinely do not oppose suffering — then it is difficult to imagine a solution, or to argue that this is even a problem that a thing like EA can solve.
At that point, it would be a military issue. Your campaign would no longer be about correcting errors; it would be about enforcing a specific morality upon a population who authentically don't share it. You could try that. I'm not sure I would want to help. I currently think that I would help, but if it turns out that so much of humanity's performance of compassion was feigned, I could no longer be confident that I would end up on that side of the border. I'm not even sure that you could be confident that you would remain on that side of the border.
I disagree here. Even though I think it's more likely than not that space factory farming won't go on forever, it's not impossible that it will stay, and the chance isn't vanishingly low. I wrote a post on it.
Also, for cause prioritization, we need to look at the expected values from the tail scenarios. Even if the chance is as low as 0.5%, or 0.1%, the huge stakes might mean the expected value could still be astronomical, which is what I argue for space factory farming. To dismiss the cause, we would need to show that factory farming will go away in the near/mid future with 100% certainty, and I don't see good arguments for that.
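The arithmetic behind this tail-scenario argument can be sketched in a few lines. All the numbers below are made-up placeholders purely for illustration (the probabilities come from the text; the stake figures are hypothetical), not real estimates of anything:

```python
# Expected-value comparison for tail scenarios.
# All stake figures are illustrative placeholders, not real estimates.

def expected_value(probability: float, stake: float) -> float:
    """Expected value of a scenario: chance it occurs times what is at stake."""
    return probability * stake

# Hypothetical stake of a locked-in space factory farming future,
# in some arbitrary unit of suffering at risk.
astronomical_stake = 1e15

ev_tail_01 = expected_value(0.001, astronomical_stake)  # 0.1% chance
ev_tail_05 = expected_value(0.005, astronomical_stake)  # 0.5% chance

# A certain scenario with a much smaller (hypothetical) stake.
ev_certain_small = expected_value(1.0, 1e9)

# Even at these low probabilities, the tail scenarios dominate.
print(ev_tail_01 > ev_certain_small)
print(ev_tail_05 > ev_certain_small)
```

The point is only that a very small probability does not make the expected value small when the stake is large enough, so ruling the scenario out requires driving the probability to (effectively) zero, not merely making it low.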
For example, there is no proof that cellular agriculture is more energy- and resource-efficient than all kinds of factory farming. In fact, insect farming and the raising of certain species of fish are very efficient. Cellular agriculture also takes a lot of energy to work against entropy, especially if the requirement for the alignment of protein structures is high. In terms of organizing matter against entropy, biological beings are actually quite efficient, and cellular agriculture may have a hard time outperforming all animal protein. There needs to be serious scientific research specifically addressing this issue before we can claim that cellular agriculture will be more efficient in all possible ways.
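The shape of this claim can be made concrete with a toy comparison. Every number below is an invented placeholder (not measured data); the sketch only shows that cultured meat would need to undercut the *most* efficient animal system, not merely the least efficient one, to be more efficient in all possible ways:

```python
# Toy comparison of energy cost per kg of edible protein.
# All values are invented placeholders, NOT measured data.

energy_per_kg_protein = {  # hypothetical MJ per kg of protein
    "beef": 300.0,                 # illustrative: a notoriously inefficient system
    "insects": 60.0,               # illustrative: very efficient feed converters
    "farmed_fish": 70.0,           # illustrative: some species are near insect levels
    "cellular_agriculture": 90.0,  # illustrative: bioreactor energy costs
}

# The relevant benchmark is the best animal system, not the worst.
best_animal = min(
    v for k, v in energy_per_kg_protein.items() if k != "cellular_agriculture"
)
beats_all_animals = energy_per_kg_protein["cellular_agriculture"] < best_animal
print(beats_all_animals)
```

Under these made-up numbers, cultured meat beats beef handily yet still loses to insects and fish, which is exactly the gap the argument says serious research would need to close.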
On humans becoming compassionate: I feel pessimistic about that, because here we are talking about moral circle expansion beyond our own species membership. Within our species, whether it be women, people of color, the elderly, children, or LGBTQ people, they all share very similar genes with dominant humans (who, historically, were generally white men), similar neural structures (so that we can be sure they suffer in similar ways), and shared natural languages. All of this made it rather easy for dominant humans to understand dominated humans reasonably well. It won't be the same for our treatment of nonhumans, such as nonhuman animals and digital minds without natural language capabilities.