There are people in EA working on it.
True, but an appropriate number given the topic’s importance and neglectedness?
Compared to what? It seems like an appropriate fraction of EA resources, but a grossly inadequate amount of effort for humanity—like most other EA causes, in my view.
Compared to whatever!
The basic case rhymes quite nicely with the case for work on AI safety: (1) existing investigation of what scientific theories of consciousness imply for AI sentience plausibly suggests that we should expect AI sentience to arrive (via human intention or accidental emergence) in the not-distant future; (2) this seems like a crazy big deal, for reasons we can discuss; and (3) almost no one (inside EA or otherwise) is working on it.
Feels to me like it would be easy to overemphasize tractability concerns about this case. Again, by analogy to AIS:
“Seems hard; no one has made much progress so far.” (To a first approximation, no one has tried!)
“SOTA models aren’t similar enough to the things we care about.” (This might become less true over time; in any case, it seems like we could plausibly set ourselves up well even working only with dissimilar models.)
But I’m guessing that gesturing at my intuitions here might not convince you. Is there anything you disagree with in the above? If so, what? If not, what am I missing? (Or is it just a quantitative disagreement about the magnitude of importance or tractability?)
One other place doing work that seems relevant/adjacent is the Qualia Research Institute.