This just seems like another annoying spam / marketing email. I basically never want any unnecessary emails from any company ever.
Joseph Miller
EA co-working spaces are the most impactful EA infrastructure that I’m aware of. And they are mostly underfunded.
This is particularly relevant given the recent letter from Anthropic on SB-1047.
I would like to see a steelman of the letter, since it appears to me to significantly undermine Anthropic’s entire raison d’être (which I understood to be: “have a seat at the table by being one of the big players; use this power to advocate for safer AI policies”). And I haven’t yet heard anyone in the AI Safety community defending it.
Anthropic are pushing for two key changes:
* not to be accountable for “pre-harm” enforcement of AI Safety standards (i.e., wait for a catastrophe before enforcing any liability).
* “if a catastrophic event does occur … the quality of the company’s SSP should be a factor in determining whether the developer exercised ‘reasonable care.’” (i.e., if your safety protocols look good, you can be let off the hook for the consequences of catastrophe).

They are also pushing to significantly weaken whistleblower protections.
Ok thanks, I didn’t know that.
Nit: Beff Jezos was doxxed and repeating his name seems uncool, even if you don’t like him.
proximity [...] is obviously not morally important
People often claim that you have a greater obligation to those in your own country than to foreigners. I’m doubtful of this
imagining drowning children that there are a bunch of nearby assholes ignoring the child as he drowns. Does that eliminate your reason to save the child? No, obviously not
Your argument seems to be roughly an appeal to the intuition that moral principles should be simple: consistent across space and time, without weird edge cases, not specific to the circumstances of the event. But why should they be?

IMO this is the mistake people make when they haven’t internalized reductionism and naturalism. In other words, they are moral realists or otherwise confused. When you realize that “morality” is just “preferences” with a bunch of pointless religious, mystical, and philosophical baggage, the situation becomes clearer.
Because preferences are properties of human brains, not physical laws, there is no particular reason to expect them to have low Kolmogorov complexity. And to say that you “should” actually be consistent about moral principles is an empty assertion that rests entirely on a hazy and unnatural definition of “should”.
Nonetheless, the piece exhibited some patterns that gave me a pretty strong allergic reaction. It made or implied claims like:
* a small circle of the smartest people believe this
* I will give you a view into this small elite group, who are the only ones who are situationally aware
* the inner circle was long TSMC way before you
* if you believe me, you can get 100x richer; there’s still alpha, you can still be early
* This geopolitical outcome is “inevitable” (sic!)
* in the future the coolest and most elite group will work on The Project. “see you in the desert” (sic)
* Etc.

These are not just vibes; they are all empirical claims (except maybe the last). If you think they are wrong, you should say so and explain why. It’s not epistemically poor to say these things if they’re actually true.
I also claim that I understand ethics.
“Good”, “bad”, “right”, “wrong”, etc. are words that people project their confusions about preferences / guilt / religion onto. They do not have commonly agreed upon definitions. When you define the words precisely, the questions become scientific, not philosophical.
People are looking for some way to capture their intuitions that God above is casting judgement about the true value of things—without invoking supernatural ideas. But they cannot, because nothing in the world actually captures the spirit of this intuition (the closest thing is preferences). So they relapse into confusion, instead of accepting the obvious conclusion that moral beliefs are in the same ontological category as opinions (like “my favorite color is red”), not facts (like “the sky appears blue”).
I expect much of this will be largely subjective and have no objective fact of the matter, but it can be better informed by both empirical and philosophical research.
So I would say it is all subjective. But I agree that understanding algorithms will help us choose which actions satisfy our preferences. (But not that searching for explanations of the magic of consciousness will help us decide which actions are good.)
I claim that I understand sentience. Sentience is just a word that people have projected their confusions about brains / identity onto.
Put less snarkily:
Consciousness does not have a commonly agreed upon definition. The question of whether an AI is conscious cannot be answered until you choose a precise definition of consciousness, at which point the question falls out of the realm of philosophy into standard science.

This might seem like mere pedantry or missing the point, because the whole challenge is to figure out the definition of consciousness, but I think it is actually the central issue. People are grasping for some solution to the “hard problem” of capturing the je ne sais quoi of what it is like to be a thing, but they will not succeed until they deconfuse themselves about the intangible nature of sentience.
You cannot know about something unless it is somehow connected to the causal chain that led to the current state of your brain. If we know about a thing called “consciousness” then it is part of this causal chain. Therefore “consciousness”, whatever it is, is a part of physics. There is no evidence for, and there cannot ever be evidence for, any kind of dualism or epiphenomenal consciousness. This leaves us to conclude that either panpsychism or materialism is correct. And causally-connected panpsychism is just materialism where we haven’t discovered all the laws of physics yet. This is basically the argument for illusionism.
So “consciousness” is the algorithm that causes brains to say “I think therefore I am”. Is there some secret sauce that makes this algorithm special and different from all currently known algorithms, such that if we understood it we would suddenly feel enlightened? I doubt it. I expect we will just find a big pile of heuristics and optimization procedures that are fundamentally familiar to computer science. Maybe you disagree, that’s fine! But let’s just be clear that that is what we’re looking for, not some other magisterium.
Sentient AI that genuinely ‘feels for us’ probably wouldn’t disempower us
Making it genuinely “feel for us” is not well defined. There are some algorithms that make it optimize for our safety. Some of these will be vaguely similar to the algorithm in human brains that we call empathy, some will not. It does not particularly matter for alignment either way.
and basically nobody there (as far as I could tell) held extremely ‘doomer’ beliefs about AI.
In any case, I think it’s clear that AI Safety is no longer ‘neglected’ within EA, and possibly outside of it.
I think this is basically entirely selection effects. Almost all the people I spoke to were “doomers” to some extent.
Variable value principles seems very weird and unlikely
Person-affecting theories. I find them unlikely
Rejections of transitivity. This seems very radical to me, and therefore unlikely
I assume you, like most EAs, are not a moral realist. In which case what do these statements mean? This seems like an instance of a common pattern in EA where people talk about morality in a religious manner, while denying having any mystical beliefs.
This is not a research problem, it’s a coordination / political problem. The algorithms are already doing what their creators intended, which is to maximise engagement.
You should also consider the impact of changing the diets of millions of children. Will this food be healthier? Will they like the food?
Yup.
the small movement that PauseAI builds now will be the foundation which bootstraps this larger movement in the future
Is one of the main points of my post. If you support PauseAI today you may unleash a force which you cannot control tomorrow.
I agree this is slightly hyperbolic. If you include the disappearance of Ilya Sutskever, there’s three. And I know of two more less widely reported. Depending on how narrow your definition of a “safety-focused researcher” is, five people leaving in less than 6 months is fairly significant.
Thanks, Rudolf, I think this is a very important point, and probably the best argument against PauseAI. It’s true in general that The Ends Do Not Justify the Means (Among Humans).
My primary response is that you are falling for status-quo bias. Yes, this path might be risky, but the default path is more risky. My perception is that the current governance of AI is on track to let us run some terrible gambles with the fate of humanity.
Consider environmentalism. It seems quite uncertain whether the environmentalist movement has been net positive (!).
We can play reference class tennis all day but I can counter with the example of the Abolitionists, the Suffragettes, the Civil Rights movement, Gay Pride or the American XL Bully.
It seems to me that people overstate the track record of populist activism at solving complicated problems
...
the science is fairly straightforward, environmentalism is clearly necessary, and the movement has had huge wins

As I argue in the post, I think this is an easier problem than climate change. Just as most people don’t need a detailed understanding of the greenhouse effect, most people don’t need a detailed understanding of the alignment problem (“creating something smarter than yourself is dangerous”).
The advantage with AI is that there is a simple solution that doesn’t require anyone to make big sacrifices, unlike with climate change. With PauseAI, the policy proposal is right there in the name, so it is harder to distort than vaguer goals like “environmental justice”.
fighting Moloch rather than sacrificing our epistemics to him for +30% social clout
I think to a significant extent it is possible for PauseAI leadership to remain honest while still having broad appeal. Most people are fine if you say, “I in particular care mostly about x-risk, but I would like to form a coalition with artists who have lost work to AI.”
There is a spirit here, of truth-seeking and liberalism and building things, of fighting Moloch rather than sacrificing our epistemics to him for +30% social clout. I admit that this is partly an aesthetic preference on my part. But I do believe in it strongly.
I’m less certain about this, but I think the evidence is much less strong than rationalists would like to believe. Consider: why has no successful political campaign ever run on actually good, nuanced policy arguments? Why do advertising campaigns not make rational arguments for why you should prefer their product, instead appealing to your emotions? Why did it take until 2010 for people to have the idea of actually trying to figure out which charities are effective? The evidence is overwhelming that emotional appeals are the only way to persuade large numbers of people.
If we make the conversation about AIS more thoughtful, reasonable, and rational, it increases the chances that the right thing (whatever that ends up being—I think we should have a lot of intellectual humility here!) ends up winning.
Again, this seems like it would be good, but the evidence is mixed. People were making thoughtful arguments for why pandemics are a big risk long before Covid, but the world’s institutions were sufficiently irrational that they failed to actually do anything. If there had been an emotional, epistemically questionable mass movement calling for pandemic preparedness, that would have probably been very helpful.
Most economists seem to agree that European monetary policy is pretty bad and significantly harms Europe, but our civilization is too inadequate to fix the problem. Many people make great arguments about why aging sucks and why it should really be a top priority to fix, but it’s left to Silicon Valley to actually do something. Similarly for shipping policy, human challenge trials, and starting school later. There is a long list of preventable, disastrous policies which society has failed to fix due to lack of political will, not lack of sensible arguments.
The main message of this post is that the current PauseAI protests’ primary purpose is to build momentum for a later point.
This post is just my view. As with Effective Altruism, PauseAI does not have a homogeneous point of view or a specific required set of beliefs to participate. I expect that the main organizers of PauseAI agree that GPT-5 is very unlikely to end the world. Whether they think it poses an acceptable risk, I’m not sure.
Notably, I doubt we’ll discover the difference between GPT4 and superhuman to be small and I doubt GPT5 will be extremely good at interpretability.
I also doubt it, but I am not 1 in 10,000 confident.
I’m confused why the comments aren’t more about cause prioritization as that’s the primary choice here. Maybe that’s too big of a discussion for this comment section.