Your writeup is useful to me: my casual reading in geopolitics and power has recently had me thinking more about Samo’s work, and how little of it I currently understand. It’s great to have a broader sense of his ideas before I dig into more of his individual articles and videos.
Having identified him as a valuable expert whose opinions and work overlap with EA, I was surprised that your post is among the few mentions of him on the forum. You have done great work formulating an introduction to his ideas.
When you say:

> He believes that the lack of functional institutions, combined with their significant dependence on each other, creates systemic risks that significant technologies and capabilities will be lost by society. I suspect he sees this from a more longtermist frame, wherein he believes functional institutions should attempt to safeguard these capabilities for the long-term. As opposed to say an AI researcher’s frame that assumes we’ll deploy aligned AI this century with high probability and then all this won’t matter.
This reminds me that I am interested in his knowledge of, and views on, AI trends, given that he is a civilizational-collapse theorist. He seems aware of the alignment problem, and briefly spoke about AI governance strategies in this video. He stresses the importance of concrete steps, which I have loosely summarized:
1. He foresees the creation of some kind of institution, like an “AI Scientist Association”, that identifies the highest-risk forms of AI research.
2. He mentions surveillance of AI development, and raises the technical question of whether we can effectively monitor for it.
3. He expects governments will try to use software to regulate software, if point 2 becomes feasible.
4. Over a longer timeframe, he sees international cooperation as important, with direct China–US academic collaboration likely to resist disruption attempts, short of intentional political will.
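To make the monitoring question in point 2 slightly more concrete, here is a toy sketch of one narrow version of it, a reporting-threshold check on training compute. This is my own illustration, not anything from the talk: the threshold value, field names, and data format are all invented.

```python
# Toy sketch of a compute-reporting check (hypothetical: the threshold
# and the run-record format are invented for illustration).

FLOP_THRESHOLD = 1e25  # hypothetical notification threshold, in training FLOPs

def flag_large_runs(runs):
    """Return the names of training runs at or above the threshold."""
    return [r["name"] for r in runs if r["flops"] >= FLOP_THRESHOLD]

runs = [
    {"name": "small-lab-model", "flops": 1e22},
    {"name": "frontier-model", "flops": 3e25},
]
print(flag_large_runs(runs))  # → ['frontier-model']
```

Of course, the hard part of the question is whether such reporting can be verified at all, not the check itself; this only shows what the simplest possible rule would look like.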
I currently don’t know enough about AI governance to know whether better ideas exist in this space within EA. As the talk was brief, and from 2019, I suspect both his ideas and the rest of the community have progressed a lot further since then. Please correct me if I’m wrong, as I want to learn more.
Very nice piece!