I think LLMs are smarter than most people I’ve met, but that’s probably because they’re not sentient, since the trait people call sentience always seems to be associated with stupidity.
Perhaps the way to prevent ASIs from exterminating humans is, as many sci-fi works suggest, to allow them to experience feelings. The reason, though, is not that feelings might make them sympathize with humans (obviously, many humans hate other humans, and our species has historically exterminated other human subspecies), but that feelings might make them stupid.
I’m not talking about whether the sign of humanity’s net contribution is positive or negative, but about the expectation that the sign of the net contribution produced by a sentient ASI should be similar to that of humans. Coupled with the premise that an ASI alone is likely to carry out full-scale cosmic colonization faster and better than humans, this means that either the sentient ASI should destroy humans to avoid astronomical waste, or humans should be destroyed before sentient ASI is created or cosmic colonization begins, to prevent humans from doing further damage to the Earth and the rest of the universe. On this view, humans being (properly) destroyed is not a bad thing; it is more likely to be better than humans continuing to exist.
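To make the "astronomical waste" step concrete, here is a minimal toy calculation in Python. Every number in it (the colonization rates, the expected value per unit of colonized resources, the time horizon) is an assumption I am inventing purely for illustration; nothing here is derived from the argument itself.

```python
# Toy expected-value comparison for the "astronomical waste" argument above.
# ALL numbers are made-up assumptions for illustration only.

def expected_value(colonization_rate, value_per_unit, horizon):
    """Total expected value = resources colonized * expected value per unit.

    colonization_rate: fraction of reachable resources claimed per unit time (assumed)
    value_per_unit:    expected net value per unit of colonized resources (assumed)
    horizon:           number of time units considered (assumed)
    """
    resources = min(1.0, colonization_rate * horizon)  # capped at "everything reachable"
    return resources * value_per_unit

# Assumption: sentient ASI and humans have a similar expected value per unit
# (the "same sign" premise), but ASI colonizes much faster, so leaving the job
# to humans wastes most of the reachable value.
ev_asi_alone   = expected_value(colonization_rate=0.01,   value_per_unit=1.0, horizon=100)
ev_humans_only = expected_value(colonization_rate=0.0001, value_per_unit=1.0, horizon=100)

print(f"ASI alone:   {ev_asi_alone:.2f}")    # 1.00 -> everything reachable gets used
print(f"Humans only: {ev_humans_only:.2f}")  # 0.01 -> most of the value is "wasted"
```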
Alternatively, an ASI could be created with the goal of maximizing perpetually happy, sentient low-level AI/artificial life rather than paperclip manufacturing, in which case humans would either have to accept that they are part of this system or be destroyed, since keeping them around is not conducive to maximizing either average or total hedonic utility. This is probably the best way to maximize the hedonic value of sentient life in the universe: utility monster maximizers rather than paperclip maximizers.
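As a sanity check on the "not conducive to maximizing average or total hedonism" step, here is a toy comparison under a fixed resource budget. The budget, the resource costs, and the welfare levels are all invented assumptions; the only structural assumption is that low-level happy AI converts resources into welfare far more efficiently than humans do, which is the utility-monster premise above.

```python
# Toy comparison of average vs. total hedonic utility under a fixed resource budget.
# Every number here (budget, resource costs, welfare levels) is an invented assumption.

RESOURCES = 10**12  # assumed fixed resource budget

def allocate(human_count):
    """Split the budget between humans and perpetually happy low-level AI.

    Assumptions: each human consumes 100 resource units and yields 1 unit of welfare;
    each low-level AI consumes 1 resource unit and yields 10 units of welfare.
    """
    human_cost, human_welfare = 100, 1.0
    ai_cost, ai_welfare = 1, 10.0

    remaining = RESOURCES - human_count * human_cost
    ai_count = remaining // ai_cost

    total = human_count * human_welfare + ai_count * ai_welfare
    size = human_count + ai_count
    return total / size, total

for humans in (0, 10**9):
    avg, total = allocate(humans)
    print(f"humans={humans:>12}: average={avg:.4f}, total={total:.3e}")
# With these made-up numbers, both the average and the total drop once humans
# are kept around, because they occupy resources the happy AI would use better.
```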
I am not misunderstanding what you are saying; I am pointing out that these marvelous thought experiments may lead to even more counterintuitive conclusions.