Please stop saying that mind-space is an “enormously broad space.” What does that even mean? How have you established a measure on mind-space that isn’t totally arbitrary?
Why don’t you make the positive case for the space of possible (or, if you wish, likely) minds consisting of minds whose values are compatible with the fulfillment of human values? I think we have pretty strong evidence that not all minds are like this, even within the space of minds produced by evolution.
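To make the worry about an arbitrary measure concrete, a minimal sketch (purely an illustration; the set and distributions are chosen only for the example): any claim of the form “most of mind-space has property \(A\)” is implicitly a claim about a measure \(\mu\) on that space,

\[ \Pr_{\mu}[A] \;=\; \int_{\mathcal{M}} \mathbf{1}_A \, d\mu, \]

and two superficially reasonable measures can give opposite answers. For instance, on \(\mathcal{M} = (0,1)\) with \(A = (0, \tfrac{1}{2})\), taking \(\mu_1 = \mathrm{Beta}(1,10)\) gives \(\Pr_{\mu_1}[A] = 1 - 2^{-10} \approx 0.999\), while \(\mu_2 = \mathrm{Beta}(10,1)\) gives \(\Pr_{\mu_2}[A] = 2^{-10} \approx 0.001\).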
What if concepts and values are convergent when trained on similar data, just like we see convergent evolution in biology?
Concepts do seem to be convergent to some degree (though note that ontological shifts at increasing levels of intelligence seem likely), but I do in fact think that evidence from evolution suggests that values are strongly contingent on the kinds of selection pressures which produced various species.
The positive case is just super obvious: we’re trying very hard to make these systems aligned, and almost all the data we’re dumping into these systems is generated by humans and is therefore dripping with human values and concepts.
I also think we have strong evidence from ML research that ANN generalization is due to symmetries in the parameter-function map which seem generic enough that they would apply mutatis mutandis to human brains, which also have a singular parameter-function map (see e.g. here).
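As a toy illustration of what a symmetry in the parameter-function map, and a “singular” model, mean here (only a sketch, not the specific ML evidence being alluded to): consider the two-parameter model

\[ f_{a,b}(x) = a\,b\,x. \]

The rescaling \((a,b) \mapsto (c\,a,\; b/c)\) for any \(c \neq 0\) leaves the function unchanged, so every function \(x \mapsto kx\) is realized by a whole hyperbola \(\{ab = k\}\) of parameters (and the zero function by both coordinate axes). With Gaussian observation noise the Fisher information matrix

\[ I(a,b) \;\propto\; \mathbb{E}[x^2] \begin{pmatrix} b^2 & ab \\ ab & a^2 \end{pmatrix} \]

is degenerate everywhere, which is the sense in which such a parameter-function map is called singular.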
> I do in fact think that evidence from evolution suggests that values are strongly contingent on the kinds of selection pressures which produced various species.
Not really sure what you’re getting at here, or why this is supposed to help your side.
> evidence from evolution suggests that values are strongly contingent on the kinds of selection pressures which produced various species
The fact that natural selection produced species with different goals/values/whatever isn’t evidence that that’s the only way to get those values, because “selection pressure” isn’t a mechanistic explanation. You need more info about how values are actually implemented to rule out that a proposed alternative route to natural selection succeeds in reproducing them.
> The fact that natural selection produced species with different goals/values/whatever isn’t evidence that that’s the only way to get those values, because “selection pressure” isn’t a mechanistic explanation. You need more info about how values are actually implemented to rule out that a proposed alternative route to natural selection succeeds in reproducing them.
I’m not claiming that evolution is the only way to get those values, merely that there’s no reason to expect you’ll get them by default by a totally different mechanism. The fact that we don’t have a good understanding of how values form even in the biological domain is a reason for pessimism, not optimism.
The point I was trying to make is that natural selection isn’t a “mechanism” in the right sense at all. It’s a causal/historical explanation, not an account of how values are implemented. What is the evidence from evolution? The fact that species with different natural histories end up with different values really doesn’t tell us much without a discussion of mechanisms. We need to know 1) how different the mechanisms actually used to point biological and artificial cognitive systems toward ends really are, and 2) how many possible mechanisms for doing so exist.
> The fact that we don’t have a good understanding of how values form even in the biological domain is a reason for pessimism, not optimism.
One reason for pessimism would be that human value learning has too many messy details. But LLMs are already better behaved than anything in the animal kingdom besides humans and are pretty good at intuitively following instructions, so there is not much evidence for this problem. If you think they are not so brainlike, then this is evidence that not-so-brainlike mechanisms work. And there are also theories that value learning in current AI works roughly similarly to value learning in the brain.
Which is just to say I don’t see the prior for pessimism, just from looking at evolution.
What do you mean by ontological shifts at increasing levels of intelligence? (Compare “we don’t know how to prevent an ontological collapse, where meaning structures constructed under one world-model compile to something different under a different world-model”; is this the same thing?) Is there a good writeup anywhere of why we should expect this to happen? This seems speculative and unlikely to me.
Re: ontological shifts, see this arbital page: https://arbital.com/p/ontology_identification.