Thank you for this post! I will make sure to read the 5⁄5 books that I haven’t read yet. I’m especially excited about Joseph Henrich’s 2020 book; I had read The Secret of Our Success before but not that one.
I actually come to moral progress from an AI Safety angle. For me, the question is largely how we can set up AI systems so that they continuously improve “moral progress”, since we don’t want to leave our fingerprints on the future.
In my opinion, the larger AI Safety dangers come from “big data hell” scenarios like the ones described in Yuval Noah Harari’s Homo Deus or Paul Christiano’s slow take-off scenarios.
Therefore we want to figure out how to set up AIs in a way that automatically improves moral progress through the structure of their use. I also believe that AI will most likely go through a process similar to the one described in The Secret of Our Success, and that we should prepare appropriate optimisation functions for it.
So, if you ever feel like we might die from AI, I would love to see some work in that direction! (Happy to talk more about it if you’re up for it.)
Hi Jonas! Henrich’s 2020 book is very ambitious, but I thought it was really interesting. It draws insights from various disciplines to explain why Europe became the dominant superpower from the Middle Ages (starting to take off around the 13th century) to modernity.
Regarding AI, I think it’s currently beyond the scope of this project. Although I mention AI at some points regarding the future of progress, I don’t develop anything in-depth. So sadly I don’t have any new insights regarding AI alignment.
I do think theories of cultural evolution and processes of mutation and selection of ideas could play a key role in predicting and shaping the long-term future, whether for humans or AI. So I’m excited for some social scientists or computer modellers to take this kind of work in a direction applied to making AI values dynamic and evolving (rather than static). But again, it’s currently outside the scope of my work and area of expertise.