Thanks, lots of interesting articles in this list that I missed despite my interest in this area.
One suggestion I have is to add some studies of failed attempts at building/reforming institutions, otherwise one might get a skewed view of the topic. (Unfortunately I don’t have specific readings to suggest.)
A related topic you don’t mention here (maybe due to lack of writings on it?) is whether humanity should pause AI development and have a long (or even short!) reflection about what it wants to do next, e.g., resume AI development, or do something else like subsidizing intelligence enhancement (e.g., embryo selection) for everyone who wants it, so that more people can meaningfully participate in deciding the fate of our world. (I note that many topics on this reading list are impossible for most humans to fully understand, perhaps even with AI assistance.)
I claim that this area outscores regular AI safety on importance while being significantly more neglected.
This neglect is itself perhaps one of the most important puzzles of our time. With AGI very plausibly just a few years away, why aren’t more people throwing money or time/effort at this cluster of problems just out of self interest? Why isn’t there more intellectual/academic interest in these topics, many of which seem so intrinsically interesting to me?
I think all of:
Many people seem to believe in something like “AI will be a big deal, but the singularity is much further off (or will never happen)”.
People treat the singularity in far mode even if they admit belief.
Previously committed people (especially academics) don’t shift their interests or research areas much based on events in the world, though they do rebrand their prior interests. It takes new people entering fields to actually latch onto new areas, and there hasn’t been enough time for this.
People who approach these topics from an altruistic perspective often come away with the view “probably we can mostly let the AIs/future figure this out; other topics seem more pressing and more possible to make progress on.”
There aren’t clear shovel-ready projects.