[Question] Why isn’t there more focus in EA on research into whether / when AIs are likely to be sentient?

As far as I know, there isn’t much funding or research in EA on AI sentience (though there is some, e.g. this)

I can imagine some answers:

  • The question seems very intractable

  • Alignment is the more immediate core challenge, and widening the focus isn’t useful

  • Funders already hold a working view that additional research is unlikely to change (e.g. that AIs will eventually be sentient?)

  • The longtermist focus is on AI as an x-risk, and the main framing there is on avoiding human extinction

But it also seems important and action-relevant:

  • The current framing of AI safety is about aligning AI with humanity, but making AI development go well for AIs themselves could be comparably or even more important

  • Naively, if we knew AIs would be sentient, it might make ‘prioritising AI welfare in AI development’ a much higher-impact focus area

  • It’s an example of an area that won’t necessarily attract resources or attention from commercial sources

(I’m not at all familiar with the area of AI sentience and posted without much googling, so please excuse any naivety in the question!)