Hi Will,

I wrote a post about my concerns with longtermism (an 8-minute read). I raised several concerns there, but I suppose the most important was about the difference between:
avoiding harm to potential future beings
helping potential future beings
I will read your books, when I can, to understand your point of view in more depth. Broadly speaking, I think that longtermists should self-efface.
In my earlier post, I wrote:
Longtermists should self-efface
But I have a final concern, and a hope that longtermism will self-efface to address it. In particular, I hope that longtermists will presume that creating utility in the experience of a being with moral status, when accomplished through control of that being in context, will involve one or more of these errors:
1. errors in longtermist accounts of the experience caused for the being.
2. errors in longtermist beliefs about the control achieved over the being.
3. errors in longtermist recognition of the moral status of the being.
Total control, as a goal, will suffer at least those three types of error; the least obvious is the last.
In general, I think that as efforts toward ongoing control over another being increase, treatment of them as if they have moral status decreases, as their presence (or imagination) in your life shifts from one of intrinsic value to one of instrumental value only.
More common in experience, however, is that the errors are of types 1 or 2, and that errors of type 3 occur not because increasing control is actually established, but because frustration mounts as type 1 and 2 errors are acknowledged.
I have doubts over whether you find these concerns relevant, but I haven’t read your book yet! :)
I wrote:
I doubt whether longtermists will self-efface in this way, but hopefully they will acknowledge that the life of a person that will never be conceived has no moral status. That acknowledgement will let them avoid some obvious errors in their moral calculations.
In your summaries of the utter basics of longtermism:
future people have moral status
there can be a lot of future people
we can make their lives better
you mention existential risk. So by future people, you must mean possible future people only. I will read your book to learn more about your views of the moral status of:
the human species
the event (or act) of conception
a person never conceived (in the past or future)
Actually, you can find ideas about the moral status of a person never conceived in:
religious views about the spirit
a monotonic model of human existence inside a solipsistic model of human conception
Grief over lost opportunities to create future people seems to be an alternative to solipsistic models of human conception. I will defend imagination and its role in creating goals, but solipsism about never-existent people seems less preferable than grief.
Emotions do play a role in deciding values, and their unconscious nature makes them stabilizing influences on the ongoing presence of those values, but they can be aversive. Emotions are therefore not a feature of all plans for future experiences. In particular, the emphasis within EA culture on:
virtual people
aligned (enslaved) AI superbeings
indefinitely long life extension
rationality
removal of all suffering from life
futures of trillions of people
suggests that a combination of intellectual dishonesty and technological determinism feeds a current of EA ideas. That current runs contrary to the lessons evident in our present overshoot and to the more general understandings that:
aversive emotions and experience are present
grief, disgust, or boredom (or other feelings) about our fellow humans is inevitable but not a source of truth
there are limits to our capabilities as rational, perceptive, and wise moral agents
One clear path forward is a far future of fewer people, with lower birth rates and a declining global population intentionally accomplished over a few hundred years. One virtue of a smaller future for humanity is the assurance that it keeps resources available for its successors. Another is that it reduces the temptation to try to control humanity. A smaller population has less potential to harm itself or others.
As I wrote, I want longtermists to self-efface. Seeking control over others is inevitable in a longtermist scheme, and the delay between action and intended consequence leaves opportunities for denial of worthwhile goals in favor of intellectually dishonest or even solipsistic versions of them.