I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
titotal
For the record, while I don’t think your original post was great, I agree with you on all three points here. I don’t think you’re the only one noticing a lack of engagement on this forum, which seems to only get active whenever EA’s latest scandal drops.
I think there’s an inherent limit to the number of conservatives that EA can appeal to, because the fundamental values of EA sit firmly in the liberal tradition. For example, if you believe the five-foundations theory of moral values (which I think has at least a grain of truth to it), conservatives value tradition, authority and purity far more than liberals or leftists do, and in EA these values are (correctly, imo) not included as specific end goals. An EA and a conservative might still end up agreeing on preserving certain traditions, but the EA will be doing so as a means to the end of increasing the general happiness of the population, not as a goal in and of itself.
Even if you’re skeptical of these models of values, you can just look at a bunch of cultural factors that would be off-putting to the run-of-the-mill conservative: EA is respectful of LGBT people, including respecting transgender individuals and their pronouns; they have a large population of vegans and vegetarians; and they say you should care about far-off Africans just as much as your own neighbours.
As a result, when EA and adjacent groups try to be welcoming to conservatives, they don’t end up getting your Trump-voting uncle: they get unusual conservatives, such as Mencius Moldbug and the obsessive race-IQ people (the Manifest conference had a ton of these). These are a small group of people and by no means the majority, but even their presence in the general vicinity of EA is enough to disgust and deter many people from the movement.
This puts EA in the worst of both worlds politically: the group of people who are comfortable tolerating both trans people and scientific racists is minuscule, and it seriously hampers the ability to expand beyond the Sam Harris demographic. I think a better plan is to not compromise on progressive values, but to be welcoming to differences on the economic front.
I’d say a big problem with trying to make the forum a community space is that it’s just not a lot of fun to post here. The forum has a dry, serious tone that emulates that of academic papers, which communicates that this is a place for posting Serious and Important articles; attempts at levity or informality often get downvoted, and god forbid you don’t write in perfect, grammatically correct English. Sometimes when I’m posting here I feel pressure to act like a robot, which is not exactly conducive to community bonding.
I didn’t downvote you (and actually agree with you), but I’m assuming that the people who did would justify it by the combative tone of your writing.
Personally, I think the forum is far too heavy-handed about policing tone. It punishes newcomers for not “learning” the dominant way of speaking (with the side effect of punishing non-native English speakers), and it also deters things like humour that make a place actually pleasant to spend time in.
We do have an on-record prediction from Yudkowsky, actually: in 1999 he predicted that Drexler-style nanotech would arrive by 2010 (using some quite embarrassing reasoning, if I may comment as a physicist). He was also predicting that nanotech would be “powerful enough to destroy the planet”, which is why he wanted to build the singularity himself, something he thought the Singularity Institute could accomplish by 2008.
This seems to be an instance of crying wolf by literally the exact same person that is crying wolf today.
According to this article, CEO shooter Luigi Mangione:
really wanted to meet my other founding members and start a community based on ideas like rationalism, Stoicism, and effective altruism
It doesn’t look like he was part of the EA movement proper (which is very clear about nonviolence), but could EA principles have played a part in his motivations, similarly to SBF?
When I answered this question, I did so with the implied premise that an EA org is making these claims about the possibilities, and I went for number 1, because I don’t trust EA orgs to be accurate in their “1.5%” probability estimates, and I expect these to be more likely overestimates than underestimates.
Though I think it would be a grave mistake to conclude from the fact that ChatGPT mostly complies with developer and user intent that we have any reliable way of controlling an actual machine superintelligence. The top researchers in the field say we don’t
The link you posted does not support your claim. The 24 authors of the linked paper include some top AI researchers like Geoffrey Hinton and Stuart Russell, but they obviously do not constitute all of them, and they are obviously not a representative sample. The author list also includes people with limited expertise in the subject, including a psychologist and a medieval historian.
As for your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. Remember that OpenAI was founded as an AI safety organisation. The actions of Sam Altman seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all people, or even the majority, but it does seem to have happened at least once.
The EA space in general has fairly weak defenses against ideas that sound persuasive but don’t actually hold up to detailed scrutiny. An initiative like this, if implemented correctly, seems like a step in the right direction.
I find it unusual that this end of year review contains barely any details of things you’ve actually done this year. Why should donors consider your organization as opposed to other AI risk orgs?
“It seems hard to predict whether superintelligence will kill everyone or not, but there’s a worryingly high chance it will, and Earth isn’t prepared,” and seems to think the latter framing is substantially driven by concerns about what can be said “in polite company.”
Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don’t think it’s uncommon for someone who secretly suspects it’s all a load of nonsense to diplomatically say a statement like the above, in “polite EA company”.
Like you, I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly.
To be clear, Thorstad has written around a hundred different articles critiquing EA positions in depth, including significant amounts of object-level criticism.
I find it quite irritating that no matter how much in-depth, object-level criticism people like Thorstad or I make, if we dare to mention meta-level problems at all we often get treated like rabid social justice vigilantes. That’s just mud-slinging: both meta-level and object-level issues are important for the epistemological health of the movement.
PauseAI seems to not be very good at what they are trying to do. For example, take this abysmal press release, which makes PauseAI sound like tinfoil-wearing nutjobs, and which I already complained about in the comments here.
I think they’ve been coasting for a while on the novelty of what they’re doing, which helps obscure the fact that only a dozen or so people actually show up to these protests, making them an empty threat. This is unlikely to change as long as the protests are focused on the highly speculative threat of AI x-risk, which people do not viscerally feel as a threat and which does not carry the authoritative scientific backing of something like climate change. People might say they’re concerned about AI on surveys, but they aren’t going to actually hit the streets unless they think it’s meaningfully and imminently going to harm them.
In today’s climate, the only way to build a respectably sized protest movement is to put x-risk on the back burner and focus on attacking AI more broadly: there are a lot of people who are pissed at gen-AI in general, such as people mad about data plagiarism, job loss and enshittification. They are making some steps towards this, but I think there’s a feeling that doing so would align them politically with the left and make enemies among AI companies. They should either embrace this, or give up on protesting entirely.
I’m worried that a lot of these “questions” seem like you’re trying to push a belief, but phrasing it like a question in order to get out of actually providing evidence for said belief.
Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
First, AI safety people here tend to think that super-AI is imminent within a decade or so, so none of this stuff would kick in in time. Second, this stuff is a form of eugenics, which has a fairly bad reputation and raises thorny ethical issues even divorced from its traditional role in murder and genocide. Third, it’s all untested and based on questionable science, and I suspect it wouldn’t actually work very well, if at all.
Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
Have you considered that the rest of EA is incentivised to pretend there aren’t problems in EA, for reputational reasons? If so, why shouldn’t community health be expanded instead of reduced?
This question is basically just a baseless accusation rephrased into a question in order to get away with it. I can’t think of a major scandal in EA that was first raised by the community health team.
Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the “TESCREAL” conspiracy theory and antisemitic conspiracy theories?
Because this is a dumb and baseless parallel? There’s a lot more to antisemitic conspiracy theories than “powerful people controlling things”. In fact, the accusation Torres generally makes is to associate TESCREAL with white supremacist eugenicists, which feels like roughly the opposite end of the scale.
Why aren’t there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that that is the kind of organization you would want to join, if you truly internalize the stakes here.
Because this is a terrible idea, and on multiple occasions has already led to harmful cult-like organisations. AI safety people have already spilled a lot of ink about why a maximising AI would be extremely dangerous, so why the hell would you want to do maximising yourself?
For as long as it has existed, the “AI safety” movement has been trying to convince people that superintelligent AGI is imminent and immensely powerful. You can’t act all shocked Pikachu when some people ignore the danger warnings and take that as a cue to build it before someone else does. This was all quite a predictable result of your actions.
I would like to humbly suggest that people not engage in active plots to destroy humanity based on their personal back of the envelope moral calculations.
I think that the other 8 billion of us might want a say, and I’d guess we’d not be particularly happy if we got collectively eviscerated because some random person made a math error.
On multiple occasions, I’ve found a “quantified” analysis to be indistinguishable from a “vibes-based” analysis: you’ve just assigned those vibes a number, often one basically pulled out of your behind. (I haven’t looked enough into shrimp to know if this is one of those cases).
I think it is entirely sensible to strongly prefer cause estimates that are backed by extremely strong evidence such as meta-reviews of randomised trials, rather than cause estimates based on vibes that are essentially made up. Part of the problem I have with naive expected value reasoning is that it seemingly does not take this entirely reasonable preference into account.
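To make this concrete, here is a minimal toy sketch (my own made-up numbers and function, not anyone’s actual cost-effectiveness model) of how pricing in evidence strength separates two estimates that naive expected value reasoning would treat as identical:

```python
def shrunk_estimate(point_estimate, estimate_sd, prior_mean=1.0, prior_sd=1.0):
    """Shrink a noisy cost-effectiveness estimate toward a skeptical prior.

    Standard normal-normal Bayesian update: the noisier the estimate,
    the more it gets pulled back toward the prior. All quantities are in
    arbitrary 'value per dollar' units and are purely illustrative.
    """
    prior_precision = 1.0 / prior_sd**2
    estimate_precision = 1.0 / estimate_sd**2
    return (prior_precision * prior_mean + estimate_precision * point_estimate) / (
        prior_precision + estimate_precision
    )

# Intervention A: headline estimate backed by meta-reviews of randomised
# trials, so the error bars are tight.
evidence_backed = shrunk_estimate(point_estimate=10.0, estimate_sd=0.5)

# Intervention B: identical headline estimate, but the key inputs are
# subjective guesses, so the error bars are enormous.
vibes_based = shrunk_estimate(point_estimate=10.0, estimate_sd=10.0)

print(f"evidence-backed estimate after shrinkage: {evidence_backed:.1f}")  # ~8.2
print(f"vibes-based estimate after shrinkage:     {vibes_based:.1f}")      # ~1.1
```

The naive expected values are the same, but once the weakness of the evidence is priced in, the vibes-based number collapses back toward the skeptical prior.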
I have a PhD in computational quantum chemistry (i.e., using conventional computers to simulate quantum systems). In my opinion, quantum technologies are unlikely to be a worthy cause area. I have not researched everything in depth, so I can only give my impressions here from conversations with colleagues in the area.
First, the idea of quantum computers having any effect on WMDs in the near future seems dodgy to me. Even if practical quantum computers are built, they are still likely to be incredibly expensive for a long time to come. People seem unsure about how useful quantum algorithms will actually be for materials science simulations. We can already build approximations to compounds that run fine on classical computers, and even if quantum computers open up more approximations, you are still going to have to check against real experiments. You are also operating in an idealised realm: you can model a compound, yes, but if you want to investigate, say, its effect on humans, you need to model the human body as well, which is an entirely different beast.
The next point is that even if this does work in the future, why not put the money toward investigating it then, rather than now, before it’s been proven to work? We will have a ton of advance warning if quantum computers can actually be used for practical purposes, because they will start off really bad and improve over time.
From what I’ve heard, there’s a lot of skepticism about near-term quantum computing anyway, with a common sentiment among my colleagues being that it’s overhyped and due for a crash.
I’m also a little put off by the lumping in of quantum computing with quantum sensing and so on: only quantum computing would have a truly transformative effect on anything if realised; the others are just slightly better ways of doing things we can already do.
I’m highly skeptical about the risk of AI extinction, and highly skeptical that there will be a singularity in our near-term future.
However, I am concerned about near-term harms from AI systems such as misinformation, plagiarism, enshittification, job loss, and climate costs.
How are you planning to appeal to people like me in your movement?
I see a contradiction in EA thinking on AI and politics. Common EA beliefs are that
AI will be a revolutionary technology that affects nearly every aspect of society.
Somehow, if we just say the right words, we can stop the issue of AI from becoming politically polarised.
I’m sorry to say, but EA really doesn’t have that much of a say on the matter. The AI boosters have chosen their side, and it’s on the political right. Which means that the home for anti-AI action will end up on the left, a natural fit for anti-big business, pro-regulation ideas. If EA doesn’t embrace this reality, probably some other left-wing anti-AI movement is going to pop up, and it’s going to leave you in the dust.