The transition from “good” to “wellbeing” seems rather innocent, but it opens the way to a rather popular line of reasoning: that we should care only about the number of happy observer-moments, without caring whose moments they are. Extrapolating, we stop caring about real humans and start caring instead about possible animals. In other words, it opens the way to a pure utilitarian-open-individualist bonanza, in which the value of human life and individuality is lost and the badness of death is ignored. The last point is the most important for me, as I view irreversible mortality as the main human problem.
To be totally honest, this really gives off vibes of “I personally don’t want to die and I therefore don’t like moral reasoning that even entertains the idea that humans (me) may not be the only thing we should care about.” Gee, what a terrible world it might be if we “start caring about possible animals”!
Of course, that’s probably not what you’re actually/consciously arguing, but the vibes are still there. It particularly feels like motivated reasoning when you gesture to abstract, weakly-defined concepts like the “value of human life and individuality” and imply they should supersede concepts like wellbeing, which, when properly defined and when approaching questions from a utilitarian framework, should arguably subsume everything morally relevant.
You seem to dispute the (fundamental concept? application?) of utilitarianism for a variety of reasons—some of which (e.g., your very first example regarding the fog of distance) I see as reflecting a remarkably shallow/motivated (mis)understanding of utilitarianism, to be honest. (For example, the fog example seems to overlook that utilitarian decision-making/analysis is fully compatible with decision-making under uncertainty.)
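To make that point concrete, here is a minimal toy sketch (all numbers invented purely for illustration, not drawn from the post) of how expected-value reasoning folds uncertainty about distant or “foggy” outcomes into an ordinary utilitarian comparison:

```python
# Toy expected-welfare comparison under uncertainty (illustrative numbers only).

def expected_welfare(outcomes):
    """outcomes: list of (probability, welfare) pairs whose probabilities sum to 1."""
    return sum(p * w for p, w in outcomes)

# Option A: a certain, modest benefit close at hand.
option_a = [(1.0, 10)]

# Option B: a larger benefit hidden "in the fog", obtained only with probability 0.4.
option_b = [(0.4, 30), (0.6, 0)]

print("Expected welfare of A:", expected_welfare(option_a))  # 10.0
print("Expected welfare of B:", expected_welfare(option_b))  # 12.0
```

The fog doesn’t break the analysis; it simply enters as probabilities.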
If you’d like to make a more compelling criticism that stems from rebuffing utilitarianism, I would strongly recommend learning more about the framework from people who at least decently understand and promote/use the concept, such as here: https://www.utilitarianism.net/objections-to-utilitarianism#general-ways-of-responding-to-objections-to-utiliarianism
I need to clarify my views: I want to save humans first, and after that save all animals, from those closest to humans to the more remote. By “saving” I mean resurrection of the dead, of course. I am for the resurrection of the mammoth and for cryonics for pets. Such a framework will eventually save everyone, so in the limit it converges with other approaches to saving animals.
But “saving humans first” gives us leverage, because we will have a more powerful civilisation with a higher capacity to do good. If humans go extinct now, animals will eventually go extinct too when the Sun becomes a little brighter, around 600 million years from now.
But the claim that I want to save only my life is factually false.
I’m afraid you’ve totally lost me at this point. Saving mammoths?? Why??
And are you seriously suggesting that we can resurrect dead people whose brains have completely decayed? What?
And what is this about saving humans first? No, we don’t have to save every human first; we theoretically only need to save enough so that the process of (whatever you’re trying to accomplish?) can continue. If we are strictly welfare-maximizing without arbitrary speciesism, that may mean prioritizing saving some of the existing animals over every human currently alive (although this may be unlikely).
To be clear, I certainly understand that you aren’t saying you only care about saving your own life, but the post gives off those kinds of vibes nonetheless.
Unless you’re collecting data for an EA forum simulator (not IRB approved), I would consider disengaging in some situations. Some posts probably aren’t going to take first place as a red team prize.
I am serious about resurrection of the dead; there are several ways, including running a simulation of the whole history of mankind and filling the knowledge gaps with random noise, which, thanks to Everett, will be correct in one of the branches. I explained this idea in a longer article: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection
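For readers who find the gap-filling step hard to parse, here is a toy sketch (entirely hypothetical, with the lost information compressed to a few bits) of the underlying logic: if every possible completion of the missing data is realized somewhere, one completion necessarily matches the original person.

```python
import itertools

# Toy model: a person's lost information is a short bit-string.
# Enumerating every possible filling of the gaps guarantees that one
# candidate matches the "true" person -- the analogue of one Everett
# branch getting the random noise exactly right.

known_prefix = (0, 1)            # information that history preserved
true_missing_bits = (1, 0, 1)    # information that was lost

candidates = [known_prefix + filling
              for filling in itertools.product((0, 1), repeat=len(true_missing_bits))]

true_person = known_prefix + true_missing_bits
assert true_person in candidates  # some branch reconstructs the original exactly
print(f"{len(candidates)} possible branches; at least one is the correct reconstruction")
```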
What if we could develop future technology to read all the vibrations emanating from the earth across all of human history... the earliest ones would be farthest out, the most recent ones nearest... then we could filter through them and recreate everything that ever happened on earth, effectively watching the past... maybe even down to the level of the brain waves of each human, so we could resurrect all previously dead humans by gathering their brain waves and everything they ever said... presumably, once re-animated, they could gain memories of things they missed and reconstruct themselves further. Of course, we could do this with all extinct animals too.
This really becomes a new version of heaven. For the religious: what if this was G-d’s plan, not to give us a heaven but for us to create one with the minds we have (or have been given), this being the resurrection... Maybe G-d is not egoistic and doesn’t care whether we acknowledge the originating gift, meaning atheism is just fine. We do know love doesn’t seek its own benefit, so that would fit well, since “G-d is love”. I like being both religious and atheist at the same time, which I am.
I would like to thank the author, turchin, for inspiring this idea in me; it is truly blowing my mind. Please let me know of other writings on this.
I wrote two articles about resurrection: You Only Live Twice: A Computer Simulation of the Past Could be Used for Technological Resurrection
and
Classification of Approaches to Technological Resurrection