(This is partially echoing/paraphrasing lukeprog.) I want to emphasize the anthropic measure/phenomenology angle, or, put more straightforwardly, the observer-count angle, which to me seems like the simplest way neuron count would lead to increased moral valence. You kind of mention it, and it's discussed more in the full document, but for most of the post it's ignored.
Imagine a room where a pair of robots are interviewed. The robot interviewer is about to leave and go home for the day, and they have to decide whether to leave the light on or off. They know that one of the robots hates the dark, but the other strongly prefers it. The robot who prefers the dark also happens to be running on 1000 redundant server instances whose outputs are majority-voted together, to maximize determinism and repeatability of experiments or something. The robot who prefers the light happens to be running on just one server.
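(To make the setup concrete, here's a minimal sketch of that kind of redundancy, assuming a deterministic policy fanned out over N servers; every name in it is hypothetical illustration, not anything from the post.)

```python
from collections import Counter

def run_instance(observation: str) -> str:
    # Stand-in for one server instance of the robot's policy.
    # Deterministic, so every redundant copy computes the same thing.
    return "please turn the light off" if observation == "light on" else "thanks"

def majority_vote(observation: str, n_instances: int = 1000) -> str:
    # Run the same computation on n_instances servers and take the modal output.
    outputs = [run_instance(observation) for _ in range(n_instances)]
    return Counter(outputs).most_common(1)[0][0]

print(majority_vote("light on"))  # 1000 identical computations, one report
```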
The dark-preferring robot doesn't even know about its redundancy, and the redundancy doesn't lead it to report any greater intensity of experience. There is no report, but it's obvious that the dark-preferring robot is having its experience magnified a thousand times, because it is exactly as if there are a thousand of them, each having that same experience of being in a lit room, even though they don't know about each other.
You turn the light off before you go.
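(The decision falls out of a simple expected-welfare comparison, if we assume each running instance counts as one observer; the numbers below are illustrative, not from the post.)

```python
# One observer per running instance; equal per-instance intensity of dislike.
instances = {"dark_preferring": 1000, "light_preferring": 1}
suffering_per_instance = 1

disvalue_light_on  = instances["dark_preferring"]  * suffering_per_instance  # 1000
disvalue_light_off = instances["light_preferring"] * suffering_per_instance  # 1

# Under observer counting, turning the light off is 1000x better.
print(disvalue_light_on, disvalue_light_off)
```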
If we make some assumptions about how the brain distributes the processing of suffering, assumptions we can't be completely sure of but which seem more likely than not, then we should expect neuron count to have the same anthropic boosting effect.