An important early question I’ve been thinking about is: “Even with aligned AI, there might be a narrowing of human society. How can we make sure this narrowing is not permanent, or can at least be ameliorated?” By narrowing of society I mean that people interacting with a widely deployed AI are gradually trained to act only in ways the dominant AI does not see as a threat or otherwise select against, e.g. shedding culture-specific morals. If we don’t guard against this, we might lose important parts of our culture and be unable to recover them, because everyone will have converged.