Some thoughts on this comment:

On this part:

> I responded well to Richard’s call for More Co-operative AI Safety Strategies, and I like the call toward more sociopolitical thinking, since the Alignment problem really is a sociological one at heart (always has been). Things which help the community think along these lines are good imo, and I hope to share some of my own writing on this topic in the future.
I don’t think it was always a large sociological problem, but yeah, I’ve updated more towards the sociological aspect of alignment being important (especially as the technical problem has become easier than circa 2008-2016 views predicted).
> Whether or not I agree with Richard’s personal politics is kinda beside the point of the message here. Richard’s allowed to have his own views on things, and other people are allowed to criticise them (I think David Mathers’ comment is directionally where I lean too). I will say that not appreciating arguments from open-source advocates, who are very concerned about the concentration of power from powerful AI, has led to a completely unnecessary polarisation of that community against the AI Safety community. I think, while some tensions do exist, it wasn’t inevitable that things would get as bad as they are now, and in the end the polarisation was particularly self-defeating. Again, by doing the kind of thinking Richard is advocating for (you don’t have to co-sign his solutions, he’s even calling for criticism in the post!), we can hopefully avoid these failures in the future.
I do genuinely believe that concentration of power is a huge risk factor. In particular, I’m deeply worried about the incentives of a capitalist post-AGI economy where a few hold basically all of the rent/money: there would be stronger incentives to expropriate property from everyone else, similar to how humans routinely expropriate property from animals, combined with weak to non-existent forces against that expropriation.
That said, I think the piece’s framing of open-source AI as a defense against concentration of power, and more generally as a good thing akin to the Enlightenment, unfortunately rests on some quite bad analogies. Giving everyone AI is nothing like education or voting: at the high end, it’s basically enough for one actor to create entire very large economies on their own, and at the lower end it immensely helps with, or outright automates, the process of making biological weapons for common citizens. More importantly, education and voting derive their value from the fact that large impacts fundamentally require coordination among many people, and super-powerful AIs can remove that requirement.
More generally, I think one of the largest cruxes between reasonable open-source people and EAs in general is how much they think AIs can make advanced biology accessible to the masses, and how offense-dominant that technology is. Here I defer to the biorisk experts, including EAs, who generally think that biorisk is a wildly offense-advantaged domain that is very dangerous to democratize, rather than to the open-source people, at least for the next several years.
On Sam Altman’s firing:
> On the bounties, the one that really interests me is the OpenAI board one. I feel like I’ve been living in a bizarro-world with EAs/AI Safety people ever since it happened, because it seemed such a colossal failure, either of legitimacy or of strategy (most likely both), and it’s a key example of the “un-cooperative strategy” that Richard is concerned about imo. The combination of extreme action and ~0 justification, either externally or internally, remains completely bemusing to me and was a big wake-up call for my own perception of ‘AI Safety’ as a brand. I don’t think people should underestimate the second-impact effect this had on both ‘AI Safety’ and EA, coming about a year after FTX.
I’ll be on the blunt end and say it: I think it was mildly good, or at worst neutral, to use the uncooperative strategy to fire Sam Altman, because by default Sam Altman was going to gain all control, and would probably have had better PR if the firing hadn’t happened. More importantly, he was aiming to basically totally disempower the safety people, which leads to at least a mild increase in existential risk, and they realized they would have been manipulated out of acting if they waited, so they had to go for broke.
The main EA mistake was in acting too early, before things got notably weird.
That doesn’t mean society will react or that it’s likely to react, but I basically agree with Veaulans here:
https://x.com/veaulans/status/1890245459861729432