Explaining AI x-risk directly will excite about 20% of people and freak out the other 80%. That's fine if you want to be a public intellectual or chat with people within EA, but not fine for interacting with most family and friends, moving about in academia, etc. The standard approach for the latter is to say you're researching safe and fair AI, with shorter-term risks and longer-term catastrophes as particular examples.