Thanks very much for putting this together. This section stood out to me —
He is, however, optimistic about innovation in new social technologies and the building of new institutions. He believes that there are very few functional institutions, and that most institutions are attempts at mimicking these functional ones. He believes innovation in social technology is highly undersupplied today, and that individual founders have a significant shot at building it. He also believes that civilisation makes large jumps in complexity and scale in very short periods of time when such innovation happens; this has happened in the past, and it is possible today. In short, he thinks this is very high impact, and deserves a lot more people working on it than currently are.
Makes me think of some of the work of RadicalxChange, and also 80k’s recent interview with Audrey Tang. Curious what Samo’s take might be on either of those things.
Interesting thoughts. Apart from the sections finm mentioned, this one stood out to me as well:
status engineering—redirecting social status towards productive ends (for instance, Elon Musk making engineers high status)
I think this is something that the EA community is doing already, and maybe could/should do even more. Many of my smartest (non-EA) friends from college work in rent-seeking sectors, or sectors with neutral production value (and not to earn to give). This seems like an incredible waste of resources, since they could instead work on the most pressing problems.
One interesting question could be: are there tractable ways to do status engineering with the large group of talented non-EAs? I think this could be worth doing, because obviously not all incredibly smart people are, or want to be, part of the EA community.
I believe Sam Harris is working on an NFT project for people who have taken the GWWC pledge, so that would be one example.
Academia seems like the highest-leverage place one could focus on. Universities are to a large extent social status factories, so aligning the status conferred by academic learning and research with EA objectives (for example, by creating an ‘EA University’) could be very high impact. This also relates to the point about ‘institutions.’
Thanks for making this list.
I’ll be recording a podcast with Samo on the 9th of March. We’ll discuss these themes, as well as the consequences of, and possible solutions to, underpopulation.
Thank you for having Samo on the podcast, Gus. I find him tremendously insightful, and I eagerly look forward to hearing what he has to say.
Thanks, me too.
If you have any questions for Samo, you could write them here.
Very nice piece!
Your writeup is useful to me, as my casual reading into geopolitics and power has recently had me thinking more about Samo’s work, and how little of it I currently understand. It’s great to have a broader sense of his ideas before I dig into more of his individual articles and videos.
Having identified him as a valuable expert whose opinions and work overlap with EA, I was surprised that your post is among the few mentions of him on the forum. You have done great work formulating an introduction to his ideas.
When you say,
“He believes that the lack of functional institutions, combined with their significant dependence on each other, creates systemic risks that significant technologies and capabilities will be lost by society. I suspect he sees this from a more longtermist frame, wherein he believes functional institutions should attempt to safeguard these capabilities for the long term. As opposed to, say, an AI researcher’s frame that assumes we’ll deploy aligned AI this century with high probability, after which none of this will matter.”
This reminds me that I am interested in his knowledge of, and interest in, AI trends, given that he is a civilizational collapse theorist. It seems he is aware of the alignment problem, and he briefly spoke about AI governance strategies in this video. He stresses the importance of concrete steps, and I have attempted to loosely summarize some of them:
1. He foresees the creation of some kind of institution, like an “AI Scientist Association”, that identifies the highest-risk forms of AI research.
2. He mentions surveillance of AI development, and asks the technical question of whether we can effectively monitor for it.
3. He expects governments will try to use software-regulating software, if the monitoring in point 2 becomes feasible.
4. Over a longer timeframe, he sees international cooperation as important; he expects direct China–US academic collaboration to resist attempts at disruption, short of deliberate political will.
I currently don’t know enough about AI governance to know whether better ideas exist in this space within EA. As the talk was brief, and from 2019, I suspect both his ideas and the rest of the community’s thinking have progressed a lot further since then. Please correct me if I’m wrong, as I want to learn more.