I didn’t really want to go in depth beyond what @Ozzie Gooen already said and mentioning the event that originally prompted that line of thought, but I’ve added a link to @David Thorstad’s sequence on the subject.
GiveWell specifically was started with a focus on smaller donors, but there was always a separation between it and EA.
… I’m confused by what you mean by early EA, then. As the history of the movement is generally told, it started with the merger of three strands: GiveWell (which attempted to make charity-effectiveness research available to well-to-do-but-not-Bill-Gates-rich Westerners), GWWC (which attempted to convince well-to-do-but-not-Bill-Gates-rich Westerners to give to charity too), and the rationalists and proto-longtermists (not relevant here).
Criticisms of ineffective charities (stereotypically, the Make-A-Wish Foundation) could be part of that, but those are specifically the charities well-to-do-but-not-Bill-Gates-rich Westerners tend to donate to when they do donate. I don’t think people were going around claiming that the biggest billionaire philanthropic foundations (like, say, well, the Bill Gates Foundation) didn’t know what to do with their money.
I have said this in other spaces since the FTX collapse: the original idea of EA, as I see it, was to make the kind of research work done at philanthropic foundations open and usable for well-to-do-but-not-Bill-Gates-rich Westerners. While it’s inadvisable to outright condemn billionaires using EA work to orient their donations, for… obvious reasons, I do think there is a moral hazard in billionaires funding meta EA. Now, the most extreme policy would be to have meta EA funded solely by membership dues (as plenty of organizations are!). I’m not sure that would really be workable for the amounts of money involved, but some kind of donation cap could plausibly be envisaged.
There is an actual field called institutional development economics, which has won a sizable share of Nobel Prizes and already has a fairly good grasp of what it takes to get poor countries to develop. The idea that you could learn more about that without engaging with the field in the slightest, but by… trying to figure out how to make rich countries (with the institutional framework and problems of rich countries) richer, and then assuming that this will be magically applied (by who?) to poor countries (with the institutional framework and problems of poor countries) and work the same, is just… straight-up obvious complete nonsense.
I contest that. OP (no pun intended) cites both the Abundance Institute and Progress Studies as inspiration, and a cursory look at the think-tank sponsors and affiliations of the people involved in those shows that they are mostly a libertarian-ish, right-of-center bunch.
Recent advances in LLMs have led me to update toward believing that we live in a world where alignment is easy (i.e. CEV naturally emerges from LLMs, and future AI agents will be based on understanding and following natural-language commands by default) but governance is hard (i.e. AI agents might be co-opted by governments or corporations to lock humanity into a dystopian future, and the current geopolitical environment, characterized by democratic backsliding, cold-war-mongering, and an increase in military conflicts including wars of aggression, isn’t conducive to robust multilateral governance).
Additionally, at the meta-advocacy level, EA will suffer insofar as the bureaucracy is drained of talent. This will be particularly acute for anything touching on areas with heavy federal involvement, like public health, biosecurity, or foreign aid/policy.[3]
This may be the one silver lining, actually? There is now potentially a growing pool of low-hanging fruit for EA orgs to hire: people who are simultaneously value-aligned and technocratically minded. The thing I’m most worried about on the meta-advocacy side is a hostile takeover, as we were discussing with @Bob Jacobs here.
Yeah, IIRC EY does consider himself to have been net-negative overall so far, hence the whole “death with dignity” spiral. But I don’t think one can claim his role has been more negative than OPP/GV’s decision to bankroll OpenAI and Anthropic (at least setting aside the indirect consequences of his having influenced the development of EA in the first place).
I don’t think you’re alone at all. EY and other prominent rationalists (like LW webmaster Habryka) have been saying for quite a while that they believe EA has been net-negative for human survival. EleutherAI’s Connor Leahy recently released the strongly EA-critical Compendium, which has been praised by many leading longtermists, notably FLI’s Max Tegmark. And Anthropic’s recent antics, like calling for recursive self-improvement to beat China, are definitely souring many previously unconvinced people in those spaces on OP. From personal conversations, I can tell you PauseAI in particular is increasingly hostile to EA leadership.
While this is a good argument against it indicating governance-by-default (if people are saying that), securing longtermist funding to work with the free-software community on this (thus overcoming two of the three hurdles) still seems a potentially very cost-effective way to reduce AI risk, and worth looking into, particularly combined with differential technological development of AI defensive vs. offensive capacities.
Demonstrating specification gaming in reasoning models
US AI Safety Institute will be ‘gutted,’ Axios reports
It intensifies the AI arms race, thus shortening AGI timelines, and, after AGI, it increases the chances of the singleton being either unaligned, or technically aligned but amounting to an AGI dictatorship or some other kind of dystopian outcome.
Conditional on AGI happening under this administration, how deeply AGI companies are embedded with the national security state is a crux for the future of the lightcone, and I don’t expect institutional inertia (the reason one would expect “the US might recover relatively quickly from its current disaster” and “the US to remain somewhat less dictatorial than China even in the worst outcomes”) to hold if AGI dictatorship is a possibility for the powers that be to reach for.
(other than, ideally, stopping its occasional minor contributions to it via the right wing of rationalism, and being clear-eyed about what the 2nd Trump admin might mean for things like “we want democratic countries to beat China”)
Actually, I think this is the one thing EAs could realistically do as their comparative advantage, considering who they are socially and ideologically adjacent to, if they are afraid of AGI being reached under an illiberal, anti-secular, and anti-cosmopolitan administration: to be blunt, press Karnofsky and Amodei to shut up about “entente” and “realism” and cut ties with Thiel-aligned national-security-state companies like Palantir.
I’m not sure why you think we disagree; my “(by who?)” parenthetical was precisely pointing out that if poor countries aren’t better run, it’s not because we don’t know what works for developing poor countries (state capacity, land reform, industrial policy); it’s that the elites of those countries (dictators, generals, warlords, semi-feudal landlords, and tribal chiefs; what Acemoglu and Robinson call “extractive institutions”) are generally not incentive-aligned with the general well-being of the population, and indeed are the ones who actively benefit from state-capacity failures and rent-seeking in the first place.
However, I don’t see much reason to think that bringing back robust social democracy in developed countries will conclusively solve that (the golden age of social democracy certainly seemed compatible with desperately holding onto old colonial empires, and then propping up those very extractive institutions after formal decolonization under the guise of the Cold War), nor do the progress studies/abundance agenda people (mostly from bipartisan or conservative-leaning think tanks with ties to tech corporations, and to Peter Thiel in particular) seem particularly interested in bringing back robust social democracy in the first place.