Technoprogressive, biocosmist, rationalist, defensive accelerationist, longtermist
Matrice Jacobine
If you mean Meta and Mistral I agree. I trust EleutherAI and probably DeepSeek to not release such models though, and they’re more centrally who I meant.
This isn’t really the best example to use, considering AI image generation is very much the one area where all the most popular models are open-weights and not controlled by big tech companies. Any attempt at regulating AI image generation would therefore necessarily mean concentrating power and antagonizing the free and open source software community (something which I agree with OP is very ill-advised), and insofar as AI-skeptics are incapable of realizing that, they aren’t reliable.
Yeah, this feels particularly weird because, coming from that kind of left-libertarian-ish perspective, I basically agree with most of it, but every time he tries to talk about object-level politics it feels like going into the bizarro universe and I would flip the polarity of the signs on all of it. That’s an impression I have with @richard_ngo’s work in general, him being one of the few safetyists on the political right not to have capitulated to accelerationism-because-of-China (as most recently even Elon did). Still, I’ll try to see if I have enough things to say to collect bounties.
I wasn’t even contrasting “moral alignment” with “aligning to the creator’s specific intent [i.e. his individual coherent extrapolated volition]”, but with just “aligning with what the creator explicitly specified at all in the first place” (“inner alignment”?), which is implicitly a solved problem in the paperclip maximizer thought experiment if the paperclip company can specify “make as many paperclips as possible”, and is very much not a solved problem in LLMs.
For the record, as someone who was involved in AI alignment spaces well before it became mainstream, my impression was that, before the LLM boom, “moral alignment” was what most people understood AI alignment to mean, and what we now call “technical alignment” would have been considered capabilities work. (Tellingly, the original “paperclip maximizer” thought experiment by Nick Bostrom assumes a world where what we now call “technical alignment” [edit: or “inner alignment”?] is essentially solved and a paperclip company can ~successfully give explicit natural language goals to its AI to maximize.)
In part this may be explained by updating on the prospect of LLMs becoming the route to AGI (with the lack of a real utility function making technical alignment much harder than we thought, while natural language understanding, including of value-laden concepts, seems much more central to machine intelligence than we thought), but the incentives problem of AI alignment work increasingly being done under the influence of first OpenAI and then OPP-backed Anthropic is surely a part of it.
Why Did Elon Musk Go After Bunkers Full of Seeds?
(I suspect this explains like half of Émile Torres’ deal.)
I’m not sure why you think we disagree; my “(by who?)” parenthetical was precisely pointing out that if poor countries aren’t better-run, it’s not because it isn’t known what works for developing poor countries (state capacity, land reform, industrial policy). It’s that the elites of those countries (dictators, generals, warlords, semi-feudal landlords and tribal chiefs; what Acemoglu and Robinson call “extractive institutions”) are generally not incentive-aligned with the general well-being of the population, and indeed are the ones who actively benefit from state capacity failures and rent-seeking in the first place.
I don’t, however, see much reason to think that bringing back robust social democracy in developed countries is going to conclusively solve that (the golden age of social democracy certainly seemed to be compatible with desperately holding onto old colonial empires, and then with propping up those very extractive institutions after formal decolonization under the guise of the Cold War). Nor do the progress studies/abundance agenda people (mostly from bipartisan or conservative-leaning think tanks with ties to tech corporations, and Peter Thiel in particular) seem particularly interested in bringing back robust social democracy in the first place.
Didn’t really want to go in depth beyond what @Ozzie Gooen already said and mentioning the event that originally prompted that line of thought, but I added a link to @David Thorstad’s sequence on the subject.
GiveWell specifically was started with a focus on smaller donors, but there was always a separation between them and EA.
… I’m confused by what you would mean by early EA then? As the history of the movement is generally told, it started with the merger of three strands: GiveWell (which attempted to make charity-effectiveness research available to well-to-do-but-not-Bill-Gates-rich Westerners), GWWC (which attempted to convince well-to-do-but-not-Bill-Gates-rich Westerners to give to charity too), and the rationalists and proto-longtermists (not relevant here).
Criticisms of ineffective charities (stereotypically, the Make a Wish Foundation) could be part of that, but those are specifically the charities well-to-do-but-not-Bill-Gates-rich Westerners tend to donate to when they do donate. I don’t think people were going out claiming the biggest billionaire philanthropic foundations (like, say, well, the Bill Gates Foundation) didn’t know what to do with their money.
I have said this in other spaces since the FTX collapse: The original idea of EA, as I see it, was that it was supposed to make the kind of research work done at philanthropic foundations open and usable for well-to-do-but-not-Bill-Gates-rich Westerners. While it’s inadvisable to outright condemn billionaires using EA work to orient their donations for… obvious reasons, I do think there is a moral hazard in billionaires funding meta EA. Now, the most extreme policy would be to have meta EA funded solely by membership dues (as plenty of organizations are!). I’m not sure that would really be workable for the amounts of money involved, but some kind of donation cap could plausibly be envisaged.
There is an actual field called institutional development economics which has won a great chunk of Nobel Prizes and which already has a fairly good grasp of what it takes to get poor countries to develop. The idea that you could learn more about that without engaging with the field in the slightest, but by… trying to figure out how to get rich countries (with the institutional framework and problems of rich countries) richer, and assuming that this will be magically applied (by who?) to poor countries (with the institutional framework and problems of poor countries) and work the same, is just… straight-up obvious complete nonsense.
I contest that. OP (no pun intended) cites both the Abundance Institute and Progress Studies as inspiration; a cursory look at the think tank sponsors and affiliations of the people involved in those shows that they are mostly a libertarian-ish, right-of-center bunch.
Recent advances in LLMs have led me to update toward believing that we live in the world where alignment is easy (i.e. CEV naturally emerges from LLMs, and future AI agents will be based on understanding and following natural language commands by default), but governance is hard (i.e. AI agents might be co-opted by governments or corporations to lock humanity into a dystopian future, and the current geopolitical environment, characterized by democratic backsliding, cold-war mongering, and an increase in military conflicts including wars of aggression, isn’t conducive to robust multilateral governance).
Additionally, at the meta-advocacy level, EA will suffer insofar as the bureaucracy is drained of talent. This will be particularly acute for anything touching on areas with heavy federal involvement, like public health, biosecurity, or foreign aid/policy.[3]
This may be the one silver lining, actually? There is potentially now going to be a growing pool of low-hanging-fruit hires for EA organizations: people who are simultaneously value-aligned and technocratically minded. The thing I’m most worried about on the meta-advocacy side is hostile takeover, as we were discussing with @Bob Jacobs here.
Yeah, IIRC EY does consider himself to have been net-negative overall so far, hence the whole “death with dignity” spiral. But I don’t think one can claim his role has been more negative than OPP/GV’s decision to bankroll OpenAI and Anthropic (at least when removing the indirect consequences of him having influenced the development of EA in the first place).
I don’t think you’re alone at all. EY and other prominent rationalists (like LW webmaster Habryka) have also been saying for quite a while already that they believe EA has been net-negative for human survival. EleutherAI’s Connor Leahy has recently released the strongly EA-critical Compendium, which has been praised by many leading longtermists, particularly FLI’s Max Tegmark. And Anthropic’s recent antics, like calling for recursive self-improvement to beat China, are definitely souring on OP a lot of the people in those spaces who were still unconvinced. From personal conversations, I can tell you PauseAI in particular is increasingly hostile to EA leadership.
I think you’re interpreting as ascendancy what is mostly just Silicon Valley realigning to the Republican Party (which is more of a return to the norm both historically and for US industrial lobbies in general). None of the Democrats you cite are exactly rising stars right now.