Thanks for writing this! I have been meaning to write something about why I think digital sentience should potentially be prioritized more highly in EA; in lieu of that post, here’s a quick pitch:
One of EA’s comparative advantages seems to have been “taking ideas seriously.” Many of the core ideas in EA came from other fields (economics, philosophy, etc.); the unusual aspect of EA is that we treated invertebrate welfare or Famine, Affluence, and Morality not as intellectual thought experiments but as serious issues.
It seems possible to me that digital welfare work will, by default, exist as an intellectual curiosity. My sample of AI engineers is skewed, but my sense is that most of them would be happy to discuss digital sentience for a couple of hours, yet are unlikely to focus on it heavily.
Going from “that does seem like a potentially big problem, someone should look into that” to “I’m going to look into that” is a thing that EAs are sometimes good at doing.
On (2): I agree most are unlikely to focus on it heavily, but convincing some people at top labs to care even slightly seems like it could have a big effect, by making sure at least a little animal welfare and digital minds content is included in whatever they train AIs to aim towards. Even a small amount of empathy and open-mindedness about what could be capable of suffering should do a lot to reduce the risk of astronomical suffering.