In this book you interview lots of people in AI. As somewhat of an outsider, do you have any views on things you think the AI safety community is wrong about, ideas that seem far-fetched, or things they should be focusing more on?
Thanks for your questions. I worry that even if AI ends up being safe in an X-risk sort of way, it might nevertheless create a world we wouldn’t want to bring into existence. I think recommender algorithms are a good example of this: they are already extractive, and it’s easy to imagine that more powerful AI will intensify such dynamics. I would like to see more attention given to AI that is not only “helpful, honest and harmless”, but conducive to fulfilment. What that AI looks like, I have no idea, and I don’t want to invoke paternalism and nannying. But I’ll give a hat-tip to my friends at the Meaning Alignment Institute, who have done a lot of work on this.