I’m a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.
titotal
The EA space in general has fairly weak defenses against ideas that sound persuasive but don’t actually hold up to detailed scrutiny. An initiative like this, if implemented correctly, seems like a step in the right direction.
I find it unusual that this end-of-year review contains barely any details of things you’ve actually done this year. Why should donors consider your organization as opposed to other AI risk orgs?
The framing under discussion is: “It seems hard to predict whether superintelligence will kill everyone or not, but there’s a worryingly high chance it will, and Earth isn’t prepared,” and the suggestion seems to be that this softer framing is substantially driven by concerns about what can be said “in polite company.”
Funnily enough, I think this is true in the opposite direction. There is massive social pressure in EA spaces to take AI x-risk and the doomer arguments seriously. I don’t think it’s uncommon for someone who secretly suspects it’s all a load of nonsense to diplomatically say a statement like the above, in “polite EA company”.
Like you, I urge people who think AI x-risk is overblown to make their arguments loudly and repeatedly.
To be clear, Thorstad has written around a hundred different articles critiquing EA positions in depth, including significant amounts of object-level criticism.
I find it quite irritating that no matter how much in-depth, object-level criticism people like Thorstad or I make, if we dare to mention meta-level problems at all we often get treated like rabid social justice vigilantes. This is just mud-slinging: both meta-level and object-level issues are important for the epistemological health of the movement.
PauseAI seems to not be very good at what they are trying to do. Take, for example, this abysmal press release, which makes PauseAI sound like tinfoil-hat-wearing nutjobs, and which I already complained about in the comments here.
I think they’ve been coasting for a while on the novelty of what they’re doing, which helps obscure the fact that only a dozen or so people are actually showing up to these protests, making them an empty threat. This is unlikely to change as long as the focus of these protests is the highly speculative threat of AI x-risk, which people do not viscerally perceive as a threat and which does not carry the authoritative scientific backing of something like climate change. People might say they’re concerned about AI on surveys, but they aren’t going to actually hit the streets unless they think it’s meaningfully and imminently going to harm them.
In today’s climate, the only way to build a respectably sized protest movement is to put x-risk on the backburner and focus on attacking AI more broadly: there are a lot of people who are pissed at gen-AI in general, such as people mad about data plagiarism, job loss, and enshittification. They are making some steps towards this, but I think there’s a feeling that doing so would end up aligning them politically with the left and making enemies among AI companies. They should either embrace this, or give up on protesting entirely.
I’m worried that a lot of these “questions” seem like you’re trying to push a belief, but phrasing it like a question in order to get out of actually providing evidence for said belief.
Why has Open Philanthropy decided not to invest in genetic engineering and reproductive technology, despite many notable figures (especially within the MIRI ecosystem) saying that this would be a good avenue to work in to improve the quality of AI safety research?
First, AI safety people here tend to think that super-AI is imminent within a decade or so, so none of this stuff would kick in in time. Second, this stuff is a form of eugenics, which has a fairly bad reputation and raises thorny ethical issues even divorced from its traditional role in murder and genocide. Third, it’s all untested and based on questionable science, and I suspect it wouldn’t actually work very well, if at all.
Has anyone considered possible perverse incentives that the aforementioned CEA Community Health team may experience, in that they may have incentives to exaggerate problems in the community to justify their own existence? If so, what makes CEA as a whole think that their continued existence is worth the cost?
Have you considered that the rest of EA is incentivised to pretend there aren’t problems in EA, for reputational reasons? If so, why shouldn’t community health be expanded instead of reduced?
This question is basically just a baseless accusation rephrased as a question in order to get away with making it. I can’t think of a major scandal in EA that was first raised by the community health team.
Why have so few people, both within EA and within popular discourse more broadly, drawn parallels between the “TESCREAL” conspiracy theory and antisemitic conspiracy theories?
Because this is a dumb and baseless parallel? There’s a lot more to antisemitic conspiracy theories than “powerful people controlling things”. In fact, the general accusation used by Torres is to associate TESCREAL with white supremacist eugenicists, which feels kinda like the opposite end of the scale.
Why aren’t there more organizations within EA that are trying to be extremely hardcore and totalizing, to the level of religious orders, the Navy SEALs, the Manhattan Project, or even a really intense start-up? It seems like that is the kind of organization you would want to join, if you truly internalize the stakes here.
Because this is a terrible idea, and on multiple occasions has already led to harmful cult-like organisations. AI safety people have already spilled a lot of ink about why a maximising AI would be extremely dangerous, so why the hell would you want to do maximising yourself?
For as long as it’s existed the “AI safety” movement has been trying to convince people that superintelligent AGI is imminent and immensely powerful. You can’t act all shocked pikachu that some people would ignore the danger warnings and take that as a cue to build it before someone else does. This was all quite a predictable result of your actions.
I would like to humbly suggest that people not engage in active plots to destroy humanity based on their personal back of the envelope moral calculations.
I think that the other 8 billion of us might want a say, and I’d guess we’d not be particularly happy if we got collectively eviscerated because some random person made a math error.
On multiple occasions, I’ve found a “quantified” analysis to be indistinguishable from a “vibes-based” analysis: you’ve just assigned those vibes a number, often one basically pulled out of your behind. (I haven’t looked enough into shrimp to know if this is one of those cases).
I think it is entirely sensible to strongly prefer cause estimates that are backed by extremely strong evidence such as meta-reviews of randomised trials, rather than cause estimates based on vibes that are essentially made up. Part of the problem I have with naive expected value reasoning is that it seemingly does not take this entirely reasonable preference into account.
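To make that preference concrete, here’s a minimal sketch (my own illustration, not anything from the original post) of a standard normal-normal Bayesian adjustment: a noisy “vibes” estimate gets shrunk almost all the way back to the prior, while a tight, RCT-backed estimate barely moves. All the numbers here are made up for the sake of the example.

```python
# Sketch: why evidence quality should discount an expected-value estimate.
# Assumes a simple normal prior over cost-effectiveness and a normal
# likelihood for each estimate; all numbers are invented for illustration.

def posterior_mean(prior_mean, prior_var, estimate, estimate_var):
    """Precision-weighted average of prior and estimate (normal-normal model)."""
    prior_precision = 1.0 / prior_var
    estimate_precision = 1.0 / estimate_var
    return (prior_mean * prior_precision + estimate * estimate_precision) / (
        prior_precision + estimate_precision
    )

PRIOR_MEAN, PRIOR_VAR = 1.0, 1.0  # a "typical" intervention: ~1 unit of good per dollar

# Both estimates claim 100x effectiveness, but with very different evidence quality.
rct_backed = posterior_mean(PRIOR_MEAN, PRIOR_VAR, estimate=100.0, estimate_var=0.25)
vibes_based = posterior_mean(PRIOR_MEAN, PRIOR_VAR, estimate=100.0, estimate_var=10_000.0)

print(f"RCT-backed estimate after adjustment:  {rct_backed:.1f}")   # ~80.2
print(f"Vibes-based estimate after adjustment: {vibes_based:.2f}")  # ~1.01
```

Naive expected value reasoning treats both claims as “100x”; the adjustment above is one way of formalising why the made-up one should barely move your decisions.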
I have a PhD in computational quantum chemistry (i.e., using conventional computers to simulate quantum systems). In my opinion, quantum technologies are unlikely to be a worthy cause area. I have not researched everything in depth, so I can only give my impressions here from conversations with colleagues in the area.
First, the idea of quantum computers having any effect on WMDs in the near future seems dodgy to me. Even if practical quantum computers are built, they are still likely to be incredibly expensive for a long time to come. People seem unsure about how useful quantum algorithms will actually be for materials science simulations. We can build approximations to compounds that run fine on classical computers, and even if quantum computers open up more approximations, you’re still going to have to check in with real experiments. You are also operating in an idealised realm: you can model the compounds, yes, but if you want to investigate, say, their effect on humans, you need to model the human body as well, which is an entirely different beast.
The next point is that even if this does work in the future, why not put the money into investigating it then, rather than now, before it’s been proven to work? We will have a ton of advance warning if quantum computers can actually be used for practical purposes, because they will start off really bad and improve over time.
From what I’ve heard, there’s a lot of skepticism about near-term quantum computing anyway, with a common sentiment among my colleagues being that it’s overhyped and due for a crash.
I’m also a little put off by the lumping in of quantum computing with quantum sensing and so on: only quantum computing would have a truly transformative effect if actually realised; the others are just slightly better ways of doing things we can already do.
I’m highly skeptical about the risk of AI extinction, and highly skeptical that there will be singularity in our near-term future.
However, I am concerned about near-term harms from AI systems such as misinformation, plagiarism, enshittification, job loss, and climate costs.
How are you planning to appeal to people like me in your movement?
If we’re listing factors in EA leading to mental health problems, I feel like it’s worth pointing out that a portion of EA thinks there’s a high chance of an imminent AI apocalypse that will kill everybody.
I myself don’t believe this at all, but for the people who do believe it, there’s no way it doesn’t affect their mental health.
This seems to me like an attempt to run away from the premise of the thought experiment. I’m seeing lots of “maybes” and “mights” here, but we can just explain them away with more stipulations: you’ve only seen the outside of their ship, you’re both wearing spacesuits that you can’t see into, you’ve done studies and found that neuron count and moral reasoning skills are mostly uncorrelated and that spaceflight can be done with more or fewer neurons, etc.
None of these avert the main problem: the reasoning really is symmetrical, so both perspectives should be valid. The EV of saving the alien is 2N, where N is the human neuron count, and the EV of saving the human, from the alien perspective, is 2P, where P is the alien neuron count. There is no way to declare one perspective the winner over the other without knowing both N and P. Remember that in the original two-envelope problem you knew both the units and the numerical value in your own envelope: this was not enough to avert the paradox.
See, the thing that’s confusing me here is that there are many solutions to the two-envelope problem, but none of them say “switching actually is good”. They are all about explaining why the EV reasoning is wrong and why switching is actually bad. So in any EV problem that can be reduced to the two-envelope problem, you shouldn’t switch. I don’t think this is confined to alien-vs-human cases either: perhaps any situation where you are unsure about a conversion ratio might run into two-envelope-style problems, but I’ll have to think about it.
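For what it’s worth, the standard resolution of the money version is easy to check numerically. Here’s a minimal Monte Carlo sketch (my own illustration, with an arbitrary made-up prior over the amounts): the naive argument says the other envelope is worth 1.25x yours, but averaged over the actual setup, switching gains nothing.

```python
# Sketch of the classic two-envelope setup: one envelope holds B, the other 2B.
# The naive EV argument (0.5 * X/2 + 0.5 * 2X = 1.25X) says you should always
# switch, but simulation shows keeping and switching have the same average value.
import random

random.seed(0)
keep_total, switch_total = 0.0, 0.0
trials = 100_000

for _ in range(trials):
    base = random.uniform(1, 100)      # arbitrary prior over the base amount
    envelopes = [base, 2 * base]
    random.shuffle(envelopes)
    yours, other = envelopes
    keep_total += yours
    switch_total += other

print(f"average if you keep:   {keep_total / trials:.2f}")
print(f"average if you switch: {switch_total / trials:.2f}")
# Both come out to ~1.5x the average base amount; switching is not a free +25%.
```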
I think switching has to be wrong, for symmetry based reasons.
Let’s imagine you and a friend fly out on a spaceship and run into an alien spaceship from another civilisation that seems roughly as advanced as yours. You and your buddy have just met the alien and their buddy, but haven’t learnt each other’s languages, when an accident occurs: your buddy and their buddy go flying off in different directions, and you can collectively only save one of them. The human is slightly closer, and a rescue attempt is slightly more likely to be successful as a result: based solely on hedonic utilitarianism, do you save the alien instead?
We’ll make it even easier and say that our moral worth is strictly proportional to the number of neurons in the brain, which is an actual, physical quantity.
I can imagine being an EA-style reasoner, and reasoning as follows: obviously I should anchor on the alien and human having equal neuron counts, at level N. But obviously there’s a lot of uncertainty here. Let’s approximate a lognormal-style distribution and say there’s a 50% chance the alien is also at level N, a 25% chance they have N/10 neurons, and a 25% chance they have 10N neurons. So the expected number of neurons in the alien is 0.25*(N/10) + 0.5*N + 0.25*(10N) = 3.025N. Therefore, the alien is worth about 3 times as much as a human in expectation, so we should obviously save it over the human.
Meanwhile, by pure happenstance, the alien is also a hedonic EA-style reasoner with the same assumptions, with neuron count P. They also do the calculation, and reason that the human is worth 3.025P, so they should save the human.
Clearly, this reasoning is wrong. The cases of the alien and the human are entirely symmetric: both should realise this, rate each other equally, and just save whoever’s closer.
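To spell out the symmetry, here’s the same calculation written from both sides (a sketch using the illustrative numbers above, nothing more): each party concludes the other is worth ~3x more in expectation, which is exactly the two-envelope structure.

```python
# Both the human and the alien anchor on their own neuron count and put the same
# lognormal-ish distribution on the ratio: 25% at 1/10, 50% at 1, 25% at 10.
probs = [0.25, 0.5, 0.25]
ratios = [0.1, 1.0, 10.0]

expected_ratio = sum(p * r for p, r in zip(probs, ratios))
print(f"Human's expected alien/human neuron ratio: {expected_ratio:.3f}N")  # 3.025N
print(f"Alien's expected human/alien neuron ratio: {expected_ratio:.3f}P")  # 3.025P
# Each side concludes the other is worth ~3x more, so each would "save the other" --
# the reasoning cannot be right for both at once.
```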
If your reasoning gives the wrong answer when you scale it up to aliens, it’s probably also giving the wrong answer for chickens and elephants.
If we make reasoning about chickens that is correct, it should also be able to scale up to aliens without causing problems. If your framework doesn’t work for aliens, that’s an indication that something is wrong with it.
Chickens don’t hold a human-favouring position because they are not hedonic utilitarians, and aren’t intelligent enough to grasp the concept. But your framework explicitly does not weight the worth of beings by their intelligence, only their capacity to feel pain.
I think it’s simply wrong to switch in the case of the human vs alien tradeoff, because of the inherent symmetry of the situation. And if it’s wrong in that case, what is it about the elephant case that has changed?
So in the two elephants problem, by pinning to humans, are you affirming that switching from 1 human EV to 1 elephant EV, when you are unsure about the HEV-to-EEV conversion, actually is the correct thing to do?
Like, option 1 is 0.25 HEV better than option 2, but option 2 is 0.25 EEV better than option 1, but you should pick option 1?
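As an illustration, here’s one conversion-ratio distribution that produces exactly those numbers (my own choice of numbers, not necessarily the one in the post), labelling “option 1” as saving the elephant and “option 2” as saving the human, which is also my assumption:

```python
# If the elephant-to-human conversion ratio r is 0.5 or 2.0 with equal probability,
# then in human units (HEV) the elephant's expected worth is E[r] = 1.25,
# while in elephant units (EEV) the human's expected worth is E[1/r] = 1.25.
probs = [0.5, 0.5]
ratios = [0.5, 2.0]   # an elephant's worth, measured in humans

elephant_in_hev = sum(p * r for p, r in zip(probs, ratios))       # 1.25 HEV
human_in_eev = sum(p * (1 / r) for p, r in zip(probs, ratios))    # 1.25 EEV

print(f"Elephant in human units:   {elephant_in_hev:.2f} HEV (0.25 better than 1 human)")
print(f"Human in elephant units:   {human_in_eev:.2f} EEV (0.25 better than 1 elephant)")
# Each option looks 0.25 "better" than the other, depending on which units you pin.
```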
What if instead of an elephant, we were talking about a sentient alien? Wouldn’t they respond to this with an objection like “hey, why are you picking the HEV as the basis, you human-centric chauvinist?”
Maybe it’s worth pointing out that Bostrom, Sandberg, and Yudkowsky were all on the same extropian listserv together (the one from the infamous racist email), and have been collaborating with each other for decades. So maybe it’s not precisely a geographic distinction, but there is a very tiny cultural one.
A couple of astronauts hanging out in a dome on Mars is not the same thing as an interplanetary civilization. I expect Mars landings to follow the same trajectory as the Moon landings: put a few people there for the sake of showing off, then not bother with it for half a century, then half-assedly discuss putting people there long-term, again for the sake of showing off.
I recommend the book A City on Mars for an explanation of the massive social and economic barriers to space colonisation.
I hope you don’t take this the wrong way, but this press release is badly written, and it will hurt your cause.
I know you say you’re talking about more than extinction risks, but when you put: “The probability of AGI causing human extinction is greater than 99%” in bold and red highlight, that’s all anyone will see. And then they can go on to check what experts think, and notice that only a fringe minority, even among those concerned with AI risk, believe that figure.
By declaring your own opinion as the truth, over that of experts, you come off like an easily dismissible crank. One of the advantages of the climate protest movements is that they have a wealth of scientific work to point to for credibility. I’m glad you are pointing out current day harms later on in the article, but by then it’s too late and everyone will have written you off.
In general, there are too many exclamation points! It comes off as weird and off-putting! And RANDOMLY BREAKING INTO ALLCAPS makes you look like you’re arguing on an internet forum. And there are paragraphs that are way too long, full of confusing phrases that a layperson won’t understand.
I suggest you find some people who have absolutely zero exposure to AI safety or EA at all, and run these and future documents by them for ideas on improvements.
The link you posted does not support your claim. The 24 authors of the linked paper include some top AI researchers like Geoffrey Hinton and Stuart Russell, but they obviously do not include all of them, and they are obviously not a representative sample. The list also contains people with limited expertise in the subject, including a psychologist and a medieval historian.
In regards to your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. I mean, remember that OpenAI was founded as an AI safety organisation. The actions of Sam Altman seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety became more profitable. I doubt this applies to all people, or even the majority, but it does seem like it’s happened at least once.