How could they not be conscious?
It is rare for theories of consciousness to make any demands on motivational structure.
Global workspace theory, for instance, says that consciousness depends on having a central repository through which different cognitive modules talk to each other. If the modules instead communicated directly, point to point, there would be no conscious experience (on that theory). I see no reason why decision making in that case would have to rely on different mechanisms.
Higher-order theories suggest that consciousness depends on having representations of our own mental states. A creature could have all sorts of direct concerns that it never reflected on, and these could look a lot like ours.
IIT suggests that you could have a high-level duplicate of a conscious system that was nonetheless unconscious because of its fine-grained details.
Etc.
The specific things you would need to change in the robots to render them not conscious depend on your theory, but I don’t think you need to go quite so far as to make them a lookup table or a transformer.
My impression was that you favor theories that treat the mechanisms behind our judgments about the weirdness of consciousness as critical to conscious experience. I could imagine a robot just like us but totally non-introspective, lacking phenomenal concepts, etc. Would you think such a thing was conscious? Could it not desire things in something like the way we do?
There’s another question about whether I’d actually dissect one, and maybe I still wouldn’t, but this could be for indirect or emotional reasons. It could still be very unpleasant, even traumatic, for me to dissect something that cries out, over the desperate pleas of its mother. Or it could be bad to become less sensitive to such responses, since they are often good indicators of a risk of morally significant harm. People who were confident that nonhuman animals don’t matter in themselves have sometimes condemned animal cruelty for similar reasons.
This supports my main argument. If you value conscious experience, these emotional reasons could be concerning for the long-term future. It seems like a slippery slope from being nice to them because we find it more pleasant, to thinking that they are moral patients, particularly if we interact with them frequently. It is possible that our generation will never stop caring about consciousness, but if we’re not careful, our children might.
I stick by my intuition, but it is really just an intuition about how humans behave. Perhaps some people would be completely unbothered in that situation. Perhaps most would. (I actually find that worrisome in a different way, because it suggests that people may easily overlook AI wellbeing. Perhaps you have the right reasons for happily ignoring their anguished cries, but not everyone will.) This is an empirical question, really, and I don’t think we’ll know how people will react until it happens.