I’m not saying it’s impossible to make sense of the idea of a metric of “how conscious” something is, just that it’s unclear enough what this means that any claim employing the notion without explanation is not “commonsense”.
Dr. David Mathers
‘There’s a common sense story of: more neurons → more compute power → more consciousness.’
I think it is very unclear what “more consciousness” even means. “Consciousness” isn’t “stuff” like water that you can have a greater weight or volume of.
Why did you unendorse?
It’s hard to see how the backlash could actually destroy GiveWell or stop Moskovitz and Tuna giving away their money through Open Phil/something that resembles Open Phil. That’s a lot of EA right there.
Good comment, but Drexler actually strikes me as both more moderate and more interesting on AI than just “same as Yudkowsky”. He thinks really intelligent AIs probably won’t be agents with goals at all (at least the first ones we build), and that this means that takeover worries of the Bostrom/Yudkowsky kind are overrated. It’s true that he doesn’t think the risks are zero, but if you look at the section titles of his FHI report, a lot of it is actually devoted to debunking various claims Bostrom/Yudkowsky make in support of the view that takeover risk is high: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf
I don’t think this affects the point you’re making; it just seemed a bit unfair on Drexler if I didn’t mention this.
‘The way to actually fix this would be actual solidarity and bridge building on dealing with the short-term harms of AI.’
What would this look like? I feel like if all you do is say nice things, that is usually a good idea, but it won’t move the dial that much (and it is also potentially lying, depending on context and your own opinions; we can’t just assume all concerns about short-term harm, let alone proposed solutions, are well thought out). But if you’re advocating spending actual EA money and labour on this, surely you’d first need to make a case that stuff “dealing with the short-term harms of AI” is not just good (plausible), but also better than spending the money on other EA stuff. I feel like a hidden crux here might be that you, personally, don’t believe in AI X-risk*, so you think it’s an improvement if AI-related money is spent on short-term stuff, whether or not that is better than spending it on animal welfare or global health and development, or for that matter anti-racist/feminist/socialist stuff not to do with AI. But obviously, people who do buy that AI X-risk is a comparable or better cause area than standard near-term EA stuff or biorisk can’t take that line.
*I am also fairly skeptical it is a good use of EA money and effort for what it’s worth, though I’ve ended up working on it anyway.
Thorstad is mostly writing about X-risk from bioterror. That’s slightly different from biorisk as a broader category. I suspect Thorstad is also skeptical about the latter, but that is not what the blogposts are mostly focused on. It could be that frontier AI models will make bioterror easier and this could kill a large number of people in a bad pandemic, even if X-risk from bioterror remains tiny.
Yeah, I agree the cases seem very different.
‘There’s no life bad enough for us to try to actively extinguish it when the subject itself can’t express a will for that’
I agree something seems very bad intuitively about trying to reduce the numbers of wild animals via killing them, but this seems too strong to me. What about a case where a pet dog is in terrible pain, but will live a few more weeks? Most people seem to regard it as better for the dog to have it painlessly killed at that point. I guess that could be wrong, but I am skeptical. (I agree that human lives specifically can be net positive for their subjects overall despite featuring substantially more pain than pleasure, but I feel like that might depend precisely on the fact that humans can form thoughts like “I am glad to be alive” in such circumstances.)
‘No. AGI is different. It will have it’s own goals and agency.’ Only if we choose to build it that way: https://www.fhi.ox.ac.uk/wp-content/uploads/Reframing_Superintelligence_FHI-TR-2019-1.1-1.pdf (Though Bengio was correct when he pointed out that even if lots of people build safer tools, that doesn’t stop a more reckless person building an agent instead.)
‘There are very few people that we have consistently seen publicly call for a stop to AGI progress. The clearest ones are Eliezer’s “Shut it All Down” and Nate’s “Fucking stop”.
The loudest silence is from Paul Christiano, whose RSPs are being used to safety-wash scaling.’
I’m not blaming the authors for this, as they couldn’t know, but literally today, on this forum, Paul Christiano has publicly expressed clear beliefs about whether a pause would be a good idea, and why he’s not advocating for one directly: https://forum.effectivealtruism.org/posts/cKW4db8u2uFEAHewg/thoughts-on-responsible-scaling-policies-and-regulation
Christiano: “If the world were unified around the priority of minimizing global catastrophic risk, I think that we could reduce risk significantly further by implementing a global, long-lasting, and effectively enforced pause on frontier AI development—including a moratorium on the development and production of some types of computing hardware. The world is not unified around this goal; this policy would come with other significant costs and currently seems unlikely to be implemented without much clearer evidence of serious risk.

A unilateral pause on large AI training runs in the West, without a pause on new computing hardware, would have more ambiguous impacts on global catastrophic risk. The primary negative effects on risk are leading to faster catch-up growth in a later period with more hardware and driving AI development into laxer jurisdictions.
However, if governments shared my perspective on risk then I think they should already be implementing domestic policies that will often lead to temporary pauses or slowdowns in practice. For example, they might require frontier AI developers to implement additional protective measures before training larger models than those that exist today, and some of those protective measures may take a fairly long time (such as major improvements in risk evaluations or information security). Or governments might aim to limit the rate at which effective training compute of frontier models grows, in order to provide a smoother ramp for society to adapt to AI and to limit the risk of surprises.”
Maybe there’s just nothing interesting to say (though I doubt it), but I really feel like this should be getting more attention. It’s an (at least mostly, plausibly some of the supers were EAs) outside check on the views of most big EA orgs about the single best thing to spend EA resources on.
I think that the evidence you cite for “careening towards Venezuela” being a significant risk comes nowhere near to showing that, and that as someone with a lot of sway in the community you’re being epistemically irresponsible in suggesting otherwise.
Of the links you cite as evidence:
The first is about the rate of advance slowing, which is not a collapse or regression scenario. At most it could contribute to such a scenario if we had reason to think one was otherwise likely.
The second describes the already existing phenomenon of cost disease, which, while concerning, has been compatible with high rates of growth and progress over the past 200 years.
The third is just a blog post about how some definitions of “democratic” are theoretically totalitarian in principle, and contains no argument (even a bad one) that totalitarianism risk is high, or rising, or will become high.
The fourth is mostly just a piece that takes for granted that some powerful American liberals and some fraction of American liberals like to shut down dissenting opinion, and then discusses inconclusively how much this will continue and what can be done about it. But this seems obviously insufficient to cause the collapse of society, given that, as you admit, periods of liberalism where you could mostly say what you like without being cancelled have been the exception not the rule over the past 200 years, and yet growth and progress have occurred. Not to mention that they have also occurred in places like the Soviet Union, or China from the early 1980s onward, that have been pretty intolerant of ideological dissent.
The fifth is a highly abstract and inconclusive discussion of the possibility that having a bunch of governments that grow/shrink in power as their policies are successful/unsuccessful might produce better policies than an (assumed) status quo where this doesn’t happen*, combined with a discussion of the connection of this idea to an obscure far-right Bay Area movement of at most a few thousand people. It doesn’t actually argue for the idea that dangerous popular ideas will eventually cause civilizational regression at all; it’s mostly about what would follow if popular ideas tended to be bad in some general sense, and you could get better ideas by having a “free market for governments” where only successful governments survived.
The last link, on dysgenics and fertility collapse, largely consists of you arguing that these are not as threatening as some people believe(!). In particular, you argue that world population will still be slightly growing by 2100 and it’s just really hard to project current trends beyond then. And you argue that dysgenic trends are real but will only cause a very small reduction in average IQ, even absent a further Flynn effect (and “absent a further Flynn effect” strikes me as unlikely if we are talking about world IQ, and not US IQ). Nowhere does it argue these things will be bad enough to send progress into reverse.
This is an incredibly slender basis to be worrying about the idea that the general trend towards growth and progress of the last 200 years will reverse absent one particular transformative technology.
*It plausibly does happen to some degree. The US won the Cold War partly because it had better economic policies than the Soviet Union.
Thanks for clarifying.
Thanks for being kind. I regret commenting at all to be honest.
EDIT: That is, I regret commenting because I actually agree that it is more important people attend to the issues raised by the post than that they worry about the one paragraph that was bothering me.
I am not referring to the attackers mentioned in the post when I say “we” there, but to people with autism as a whole, when speculating about why we might receive a higher rate of bullying and hostility across society as a whole.
‘A wealthy social class of a particular type of neurodivergence dominates the culture. Many people formally or informally identify on the autism spectrum. Openness to new experiences is high, and mind-altering drugs are common and popular. Akin to royal courts of the past, exclusive events are where deals are made and new companies are founded. It creates an environment that can be extremely fun and stimulating, but also dangerous and unaccountable. With drugs, parties, overflowing testosterone, a lack of communication skills, and blindness to social cues, consent violations happen easily and frequently.’
As a man with an Asperger’s Diagnosis from childhood, who is quite visibly autistic*, and not in tech, I feel kind of in two minds about whether I object to this passage.
On the one hand, sadly, I do suspect that it is in fact true that men with autism are more likely to commit sexual assault**. And I generally lean on the side of people saying stuff that is true and relevant, even if it is a bit non-PC, so it’d be kind of hypocritical for me to condemn the authors outright.
On the other hand, this is, effectively, an attack on a minority that already (in most contexts) faces a lot of bullying (https://www.tandfonline.com/doi/full/10.1080/13603116.2014.981602) and hostility. (I suspect some of the hostility may in some sense be a reasonable response to how we behave, but I doubt all of it is.) I also suspect we face higher rates of workplace discrimination, though from quick googling I wasn’t able to find a high-quality-looking study measuring this directly, as opposed to a lot of poor-quality-looking papers asserting this was a known fact and then measuring a different but related thing. (Again, I admit that the line between “discrimination” and “people reasonably reacting to autistic people behaving badly” can get blurry, but I doubt this accounts for all discrimination.) And it’s an attack being presented in passing without any supporting statistical evidence being cited whatsoever. Imagine if you experienced things like articles coming out in scientific journals about how everyone instantly dislikes you and there’s nothing you can do about it (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC5286449/), or read things that made you realize you (probably) weren’t just being paranoid when you thought people automatically narrowed circles you were standing in to exclude you at social and professional events (https://leiterreports.typepad.com/blog/2020/10/on-being-a-philosopher-with-autism.html). And then suddenly you had people planting the idea that you were more likely to be rapists on the public forum of the social movement you are involved in. It is quite distressing.
I don’t know if I think the authors should have left it out. But I’d at least like people to discuss this sensitively, and remember that for me, the quoted passage is not just a bit of sociological scene-setting before we move on to the important stuff. It’s more like a reminder of the reasons why in grad school I told one of the people writing a reference for me “no, don’t say that my stimming doesn’t get in the way of my teaching, because I’m scared that if people hear I’m on the spectrum, they’ll assume I’m a high sexual harassment risk”.
*I.e. I stim (https://www.autism.org.uk/advice-and-guidance/topics/behaviour/stimming) visibly enough that drunks in bars have assumed I’m learning disabled and in need of a minder; I’ve been verbally mocked in the street for my body language; and in one case a bunch of German lads on a night out in a bar in Kreuzberg came over to mock me for rocking back and forth and got really quite menacing.
**I’m less confident this extends to the kind of deliberate, planned predation this post discusses elsewhere, as opposed to simply ignoring boundaries in the moment.
Do the post :)
Yeah, I think I agree that going really hard to increase fertility would likely require bad authoritarianism, even beyond the authoritarianism arguably inherent in trying to do this. (Or at least, I weakly guess that there is a >50% chance of this.) I was probably mostly being pedantic.
Also part (although not all) of the attraction of “more neurons = more consciousness” is, I think, a picture that comes from “more input = more of a physical stuff”, which is wrong in this case. I actually do (tentatively!) think that consciousness is sort of a cluster-y concept, where the more of a range of properties a mind has, the more true* it is to say it is conscious, but none of those properties definitively is “really” what being conscious requires (e.g. sensory input into rational belief, the ability to recognize your own sensory states, some sort of raw complexity requirement to rule out very simple systems with the previous two features, etc.). And I think larger neuron counts will roughly correlate with having more of these sorts of properties. But I doubt this will lead to a view where something with a trillion neurons is a thousand times more conscious than something with a billion.
*Degrees of truth are also highly philosophically controversial though.