Thank you so much for this comment, Johan! It is really insightful. I agree that working with our evolutionary tendencies, instead of against them, would be the best option. The hard problem, as you mentioned, is how do we do that?
(I’ll give the chapter a read today—if my power manages to stay on! [there’s a Nor’easter hitting where I live]).
I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility. Outside of ML researchers, MIRI, and the rationality community, who back then considered AGI reshaping society within our lifetimes a serious possibility?
There is a very real psychological difference between the way the average human sees “sci-fi risks” (alien invasion, asteroids, Cthulhu rising) vs. realistic ones (war, poverty, recession, climate change). In 2010 AI was a sci-fi risk; in 2024 it is a realistic one. Most humans are still struggling with that transition, even as we get technically closer to AGI. This is extremely dangerous.
I hope you make it through the storm okay! Good luck there. And indeed, figuring out how to work with one’s evolutionary tendencies is not always straightforward. For many personal decisions this is easier, such as recognising that sitting 10 hours a day at the desk is not what our bodies have evolved for. “So let’s go for a run!” When it comes to large-scale coordination, however, things get trickier...
“I think what has changed since 2010 has been general awareness of transcending human limits as a realistic possibility.” → I agree with this and your following points.