Imagine creating an image of a mind without running it (so it has 0 minutes of experience, but still exists; you could imagine creating a mind in biostasis, or a digital mind on pause).
Would most self-labelled preference utilitarians care about the preferences of that mind?
If the mind wants to stay on pause and does, but also has preferences about the outside world, do those preferences have moral weight? To the same extent as the preferences of dead people?
What does it mean for it to have a preference if it’s never been run/conscious? Is it a matter of functionality or potential, such that if the mind were run in a certain way, that preference would become conscious? Which ways of running it count for this? I’d imagine you’d want to exclude destroying or changing connections before it runs, but how do we draw those lines non-arbitrarily? Do drugs, brain stimulation, dreams or hallucinations count?
It seems that we’d all have many preferences we’ve never been conscious of, because our brains haven’t been run in the right ways to make them conscious.
I wouldn’t care about the preferences that will never become conscious, so if the mind is never run, nothing will matter to it. If the mind is run, then some things might matter to it, but not every preference it could have experienced but won’t.
There are some similarities with the ethics of abortion: I think a fetus isn’t harmed if aborted before becoming conscious, but, conditional on becoming conscious, there are ways to harm the future person the fetus is expected to become, e.g. by drinking during pregnancy.