Thanks for writing this! I have been meaning to write something about why I think digital sentience should potentially be prioritized more highly in EA; in lieu of that post, here's a quick pitch:
1. One of EA's comparative advantages seems to have been "taking ideas seriously." Many of EA's core ideas came from other fields (economics, philosophy, etc.); the unusual thing about EA is that we treated invertebrate welfare and Famine, Affluence, and Morality not as intellectual thought experiments but as serious issues.
2. It seems possible to me that digital welfare work will, by default, exist as an intellectual curiosity. My sample of AI engineers is skewed, but my sense is that most of them would be happy to discuss digital sentience for a couple of hours but are unlikely to focus on it heavily.
3. Going from "that does seem like a potentially big problem, someone should look into that" to "I'm going to look into that" is a thing that EAs are sometimes good at doing.
On (2): I agree most are unlikely to focus on it heavily, but convincing some people at top labs to care even slightly seems like it could have a big effect in making sure at least a little animal welfare and digital minds content is included in whatever they train AIs to aim towards. Even a small amount of empathy and open-mindedness about what could be capable of suffering should go a long way toward reducing the risk of astronomical suffering.