Surely the information transferred from natural selection to the brain must be a fraction of the information in the genome, which is much less: https://en.m.wikipedia.org/wiki/Human_genome#Information_content The organism, including the brain, seems to be roughly a decompressed genome. And the environment can actually provide a lot of information through the senses. We can’t memorize Wikipedia, but that may be because we are not optimized for storing plain text efficiently; we can still recall quite a bit of visual and auditory information.
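A rough back-of-envelope comparison illustrates the point. This is just a sketch: the base-pair count, the synapse count, and the one-bit-per-synapse figure are order-of-magnitude assumptions, not precise values.

```python
# Back-of-envelope comparison of the genome's raw information content
# with a crude estimate of the brain's synaptic state.
# All figures are order-of-magnitude assumptions, not precise values.

base_pairs = 3.1e9       # approximate length of the human genome
bits_per_base = 2        # 4 possible bases -> at most 2 bits each (uncompressed upper bound)

genome_bits = base_pairs * bits_per_base
print(f"Genome, uncompressed upper bound: ~{genome_bits / 8 / 1e6:.0f} MB")  # ~775 MB

synapses = 1e14          # commonly cited order-of-magnitude estimate for an adult brain
bits_per_synapse = 1     # deliberately conservative assumption
brain_bits = synapses * bits_per_synapse
print(f"Synaptic state / genome ratio: ~{brain_bits / genome_bits:.0e}")     # ~2e+04
```

Even with the very conservative one bit per synapse, the brain's learned state comes out several orders of magnitude larger than anything the genome could specify directly.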
Well, I don’t know how serial RL algorithms are, but even highly parallel animals can be interpreted as doing some sort of RL—“operant conditioning” is the term from psychology.
I agree that brain emulation is unlikely to happen. The analogy with the brain does not mean we have to emulate it very closely. Artificial neural networks are already highly successful without a close correspondence to actual neural networks.
Inference stage: aren’t we obviously at both the inference and the training stage at the same time, unlike current ML models? We can clearly learn things every day, and we only use our very parallel wetware. The way we got brains, through natural selection, is indeed a different matter, but I would not necessarily label that the training stage. Clearly some information is hardwired by the evolutionary process, but this is only a small fraction of what a human brain does in fact learn.
And okay, so it has not been proven, but it is clearly well-supported by the available evidence.
Regarding parallelism and Amdahl’s Law: I don’t think this is a particular issue for AI progress. Biological brains are themselves extremely parallel, far more so than any processors we use today, and we still have general intelligence in brains but not in computers. If anything, the fact that computers are more serial than brains gives the former an advantage, since algorithms which run well in parallel can easily be “serialized”. It is only the other direction which is potentially very inefficient, since some (many?) algorithms parallelize poorly. In the case of neural networks, parallelism only has an advantage in terms of energy requirements. But AI does not seem substantially bottlenecked by energy, in contrast to biological organisms.
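For reference, here is the formula behind Amdahl’s Law as a minimal sketch (the function name and the 95% figure are just illustrative): the serial fraction of a workload caps the achievable parallel speedup, which is why “serializing” a parallel algorithm is cheap while parallelizing an inherently serial one may gain little.

```python
# Minimal illustration of Amdahl's Law: speedup(n) = 1 / ((1 - p) + p / n),
# where p is the parallelizable fraction of the work and n the number of processors.

def amdahl_speedup(p: float, n: int) -> float:
    """Ideal speedup on n processors when a fraction p of the work parallelizes."""
    return 1.0 / ((1.0 - p) + p / n)

# Even if 95% of the work parallelizes, the speedup never exceeds 1 / 0.05 = 20x,
# no matter how many processors are added.
for n in (10, 100, 10_000):
    print(n, round(amdahl_speedup(0.95, n), 1))   # 6.9, 16.8, 20.0
```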
Can this also be classified according to these criteria by Kaj Sotala?
Yeah. The question is whether these intuitions are still covered by something we may call preference utilitarianism.
Then why is it better, according to preference utilitarianism, not to have a preference for monuments than not to have a preference for eating properly? (Lacking either one resolves the conflict, after all.)
Yeah, preferences may still be latent dispositions in the case of unconsciousness, but the same seems plausible for Parfit’s forgotten stranger: if Parfit is reminded of the stranger, his preference may come back. So the two cases don’t seem very different.
Is it more unfair because they aren’t informed? I think it’s already unfair if they are informed. I think this only seems worse if you assume the conclusion that if you never find out, it shouldn’t matter.
Well, it is presumably less unfair if they are informed, because it would make them happy to learn that the person is cured, which matters, at least somewhat. And yes, my (and Parfit’s) intuition is that if they never find out that the person was cured, this would not be good for the carers. So curing the cared-about person would not be better than curing the person about whom no one else cares. That’s not a conclusion, it’s more a premiss for those who share this intuition.
But the example assumes the person actually wants to build the monument more strongly than they want to eat. If we admit that some desires matter more than others, even if they are weaker, we seem to be giving up preference utilitarianism.
Us not wanting people to do things with our body without our knowledge is indeed a different argument, one which seems to show that at least some preferences matter ethically. But preference utilitarianism is usually the view that only preferences matter, perhaps even all preferences.
Regarding Parfit’s case, is this not the same as me being unconscious while my body is manipulated? In both cases we do not seem to currently hold a preference: in the one case because he forgot about it, in the other because I’m unconscious.
But even suppose Parfit did not forget about the stranger. Why would it be good for Parfit that the stranger is cured without his knowledge? To me it does not seem to be good for him. And wouldn’t such a view have the unfair consequence that it is much less important to cure a lonely person about whom no other people care than a popular person about whom lots of people care, even if those people are not informed about the cure?
Yes exactly.
But that seems to be begging the question? The empirical question is whether or not all/most differences are caused by environmental factors.