Reading this great thread on SBF’s bio, it seems like his main problem was stimulants wrecking his brain. He was absurdly overconfident in everything he did, did not think things through, didn’t sleep, and admitted to being deficient in empathy (“I don’t have a soul”). Much has been written about deeper topics like naive utilitarianism and trust in response to SBF, but I wonder if the main problem might just be the drug culture that exists in certain parts of EA. Stimulants should be used with caution, and a guy like SBF probably should never have been using them, or at least nowhere near the amount he was taking.
RedStateBlueState’s Quick takes
I think the judgement calls used in coming up with moral weights have less to do with caring about animals and more to do with how much weight you give attributes like intelligence and self-awareness in determining sentience. They’re applied to animals, but they’re really more neuroscience/philosophy intuitions. The people with the strongest/most out-of-the-ordinary intuitions here are MIRI folk, not animal lovers.
Yeah, I guess that makes sense. But, uh… have other institutions actually made large efforts to preserve such info? Which institutions? Which info?
This might be a dumb question, but shouldn’t we be preserving more elementary resources for rebuilding a flourishing society? Current EA is only really meaningful in a society with resources abundant enough that people can go into nonprofit work. It feels like there are bigger priorities in the case of sub-x-risk catastrophes.
I don’t think points about timelines reflect an accurate model of how AI regulations and guardrails are actually developed. What we need is for Congress to pass a law ordering some department within the executive branch to regulate AI, e.g. by developing permitting requirements or creating guidelines for what counts as legal AI research, or whatever. Once this is done, the specifics of how AI is regulated are mostly up to that department, and they can and will change over time.
Because of this, it is never “too soon” to order the regulation of AI. We may not know exactly what the regulations should look like, but those specifics are unlikely to be written into law anyway. What we want right now is to create mechanisms to develop and enforce safety standards. Similar arguments apply to internal safety standards at companies developing AI capabilities.
It seems really hard for us to know exactly when AGI (or ASI or whatever you want to call it) is actually imminent. Even if it were possible, though, I just don’t think last-minute panicking about AGI would accomplish much. It’s all but impossible to quickly build societal consensus that the world is about to end before any harm has actually occurred. I feel like there’s an unrealistic image implicit in this post of “we will panic and then everyone will agree to immediately stop AI research”. The smart thing to do is to develop mechanisms early and then use them when we get closer to crunch time.
I think most of the animal welfare neglect comes from the fact that people who are deep enough into EA to accept all of its “weird” premises will donate to AI safety instead. Animal welfare sits in this awkward midway spot between “doesn’t rest on controversial claims” and “maximal impact”.
On “End high-skilled immigration programs”: the thing about big-brained stuff like this is that it very rarely works. Consider:
What is p(doom | immigration restrictions) − p(doom | status quo immigration)? Relatedly: might high-skilled immigrants be useful for AI safety research as well?
What is E[utility from AI doom] − E[utility from no AI doom]? This also probably gets into all sorts of infinite ethics/Pascal’s mugging issues.
How likely are you to actually change immigration laws like this?
What is the non-AI-related utility of immigration, before AI doom or assuming AI doom never comes?
What other externalities might exist from trying to get involved in immigration politics?
After doing all these calculations, you will almost assuredly end up with a value lower than that of intervening in politics to tackle AI safety some other way. Roughly, the comparison looks like the sketch below.
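To make the shape of that calculation explicit, here is a rough expected-value sketch corresponding to the questions above (purely illustrative; the symbols just restate the list items and are not estimates of anything):

\[
\Delta EV \approx p(\text{laws change}) \cdot \big[\, p(\text{doom} \mid \text{restrictions}) - p(\text{doom} \mid \text{status quo}) \,\big] \cdot \big[\, E[U \mid \text{doom}] - E[U \mid \text{no doom}] \,\big] + \Delta U_{\text{non-AI}} + \Delta U_{\text{externalities}}
\]

The claim is that this ΔEV, however you fill in the terms, comes out well below the expected value of spending the same political effort on AI safety directly.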
The other stuff seems more reasonable, but if you’re going to restrict immigrants’ ability to work on AI, you might as well restrict natives’ ability to work on AI as well. I doubt the former is much easier than the latter.
Let me make the contrarian point here that you don’t have to build AGI to get these benefits eventually. An alternative, much safer approach would be to stop AGI entirely and try to enhance human/biological intelligence with drugs or other biotech. Stopping AGI is unlikely to happen, and the biological route would take a lot longer, but it’s worth bringing up in any argument about the risks vs. rewards of AI.
I am nervous about wading into partisan politics with AI safety. I think there’s a chance that AI safety becomes super associated with one party due to a stunt like this, or worse becomes a laughing stock for both parties. Partisan politics is an incredibly adversarial environment, which I fear could undermine the currently unpolarized nature of AI safety.
Ooh, now this is interesting!
Running a candidate is one thing; actually getting coverage for that candidate is another. If we could get a candidate onto the debate stage in one of the parties, that would be a big deal, but it would also be very hard. The one person I can think of who could actually get on the debate stage is Andrew Yang, if there ends up being a Democratic primary (which I am not at all sure about). If I recall correctly, he has actually talked about AI x-risk in the past? Even if that’s wrong, I know he has interacted with EA before, so it’s possible we could convince him to talk about it. He probably won’t make it his entire (or even main) platform, though.
Without Andrew Yang on the debate stage, I’m not sure how much coverage we could really expect to get. I made a conscious effort not to pay attention to random non-debate candidates last election, so maybe others will have a better idea, but I think non-debate candidates got really low visibility. Still maybe more than nothing, but certainly not a big splash.
Ahh, I didn’t read it as you talking about the effects of Eliezer’s past outreach. I strongly buy “this time is different”, and not just because of the salience of AI in tech. The type of media coverage we’re getting is very different: the former CEO of Google warning about AI risk, and a journalist asking about AI risk in the White House press briefing, are just nothing like we’ve ever seen before. We’re reaching different audiences here. The AI landscape is also very different; AI risk arguments are a lot more convincing when we have a very good AI to point to (GPT-4) and facts like “a majority of AI researchers think p(AI killing humanity) > 10%”.
But even if you believe this time won’t be different, I think we need to think critically about which world we would rather live in:
The current one, where AI capabilities research keeps humming along with what seems to be inadequate AI safety research and nobody outside of EA is really paying attention to AI safety. All we can do is hope that AI risk isn’t as plausible as Eliezer thinks and that Sam Altman is really careful.
One where there is another SOTA AI capabilities lab, maybe owned by the government, but AI is treated as a dangerous and scary technology that must be handled with care. We have more alignment research, the government keeps tabs on AI labs to make sure they’re not doing anything stupid and maybe adds red tape that slows them down, and AI capabilities researchers everywhere avoid doing obviously stupid things.
Let’s even think about the history here. Early Eliezer advocating for AGI to prevent nanotech from killing all of humanity was probably bad. But I am unconvinced that Eliezer’s advocacy from then up until 2015 or so was net-negative. My understanding is that though his work led to the development of AI capabilities labs, nobody was working on alignment at the time anyway. The reflex of “AI capabilities research bad” only holds if there is sufficient progress on ensuring AI safety in the meantime.
One last note, on “power”. Assuming Eliezer isn’t horribly wrong about things, the worlds in which we survive AI are those where AI is widely acknowledged as extremely powerful. We’re just not going to make it if policy-makers and/or tech people don’t understand what they are dealing with here. Maybe there are reasons to delay this understanding by a few years (I personally strongly oppose that), but let’s be clear about what we would be doing.
Not to be rude, but this seems like a lot of worrying about nothing. “AI is powerful and uncontrollable and could kill all of humanity, like seriously” is not a complicated message. It would actually scare me if AI safety people are hesitant to communicate because they think misinterpretation will be as bad as you are saying here; that is a really strong assumption, an untested one at that, and the opportunity cost of not pursuing media coverage is enormous.
The primary purpose of media coverage is to introduce the problem, not to immediately push for the solution. I stated ways that different actors taking the problem more seriously would lead to progress; I’m not sure that a delay is actually the main impact. On this last point, note that (as I expected when it was first released) the main effect of the FLI letter is that a lot more people have heard of AI Safety and people who have heard of it are taking it more seriously (the latter based largely on Twitter observations), not that a delay is actually being considered.
I don’t actually know where you’re getting “these issues in communication...historically have led to a lot of x-risk” from. There was no large public discussion about nuclear weapons before their first use (and afterwards we settled into the most reasonable approach there was for preventing nuclear war, namely MAD), nor about gain-of-function research. The track record of “tell people about problems and they become more concerned about those problems”, on the other hand, is very good.
(also: premature??? really???)
Well, maybe to both parts; it’s a good sign, but a weak one. There are also concerns about response bias, etc., especially since YouGov doesn’t specialize in polling these types of questions and there’s no “ground truth” here to compare against.
I would caution people against reading too much into this. If you poll people about a concept they know nothing about (“AI will cause the end of the human race”), you will always get answers that don’t reflect real belief. These answers are very easily swayed; they don’t cause people to take action the way real beliefs would, they are not going to affect how people vote or which elites they trust, etc.
Keep Chasing AI Safety Press Coverage
Part of the motivation for this post is that I think AI safety press is substantially different from EA press as a whole. AI safety is inherently a technical issue, which means you don’t get the knee-jerk antagonism that comes up when people’s ideology is being challenged (i.e. when you tell people they should be donating to your cause instead of theirs). So while I haven’t read the whole EA press post you linked to, I think parts of it probably apply less to AI.
Keep Making AI Safety News
With all due respect, I think people are reading way too far into this; Eliezer was just talking about the enforcement mechanism for a treaty. Yes, treaties are sometimes (often? always?) backed up by force. Stating this explicitly seems dumb because it leads to posts like this one, but let’s not make it bigger than it is.
Politics is really important, so thank you for recognizing that and adding to the discussion about Pause.
But this post confuses me. You start by talking about how protests are stronger when they are centered on something people care about rather than on mere policy advocacy. Which, I don’t know if I agree with, but it’s an argument you can make. But then you shift toward advocating for regulation rather than a pause. Which is also just policy advocacy, right? And I don’t understand why you’d expect it to have better politics than a pause. Your point about needing companies to prove they are safe is pretty much the same one Holly Elmore has been making, and I don’t know why it applies better to regulation than to a pause.