I direct the AI: Futures and Responsibility Programme (https://www.ai-far.org/) at the University of Cambridge, which works on AI strategy, safety and governance. I also work on global catastrophic risks with the Centre for the Study of Existential Risk and AI strategy/policy with the Centre for the Future of Intelligence.
Uh, the word in that screenshot is “meditating”. She was asking people not to talk too loudly while she was meditating.
I would strongly caution against doing so. Even if it turns out to be seemingly justified in this instance (and I offer no view on whether it is or not), I cannot think of a more effective way of discouraging victims/whistleblowers from coming forward (in other cases in this community) in future situations.
Sutskever appears to have regrets:
https://twitter.com/ilyasut/status/1726590052392956028
This is both a very kind and a very helpful thing to offer. This is something that can help people an awful lot in terms of their career.
Good to know, thank you.
Yeah, unfortunately I suspect that “he claimed to be an altruist doing good! As part of this weird framework/community!” is going to be a substantial part of what makes this an interesting story for writers/media, and what makes it more interesting than “he was doing criminal things in crypto” (which I suspect is just not that interesting on its own at this point, even at such a large scale).
Thank you for all your work, and I’m excited for your ongoing and future projects Will, they sound very valuable! But I hope and trust you will be giving equal attention to your well-being in the near-term. These challenges will need your skills, thoughtfulness and compassion for decades to come. Thank you for being so frank—I know you won’t be alone in having found this last year challenging mental health-wise, and it can help to hear others be open about it.
Stated more eloquently than I could have, SYA.
I’d also add that, were I to be offering advice to K & E, I’d probably advise taking more time. Reacting aggressively or defensively is all too human when facing the hurricane of a community’s public opinion—and that is probably not in anyone’s best interest. Taking the time to sit with the issues, and later respond more reflectively as you describe, seems advisable.
Balanced against that, whatever you think about the events described, this is likely to have been a very difficult experience to go through in such a public way from their perspective—one of them described it in this thread as “the worst thing to ever happen to me”. That may have affected their ability to respond promptly.
+1; except that I would say we should expect to see more of this, and more high-profile examples.
AI xrisk is now moving from “weird idea that some academics and oddballs buy into” to “topic which is influencing and motivating significant policy interventions”, including on things that will meaningfully matter to people/groups/companies if put into action (e.g. licensing, potential restriction of open-sourcing, external oversight bodies, compute monitoring etc).
The former, for a lot of people (e.g. folks in AI/CS who didn’t ‘buy’ xrisk) was a minor annoyance. The latter is something that will concern them—either because they see the specific interventions as a risk to their work, or because they feel policy is being influenced in a major way by people who are misguided.
I would think it’s reasonable to anticipate more of this.
Sure, I agree with that. I also have parallel conversations with AI ethics colleagues—you’re never going to make much headway convincing a few of the most hardcore safety people that your justice/bias etc. work is anything but a trivial waste of time; in their view, anyone sane is working on averting the coming doom.
Don’t need to convince everyone; and there will always be some background of articles like this. But it’ll be a lot better if there’s a core of cooperative work too, on the things that benefit from cooperation.
My favourite recent example of (2) is this paper:
https://arxiv.org/pdf/2302.10329.pdf
Other examples might include my coauthored papers with Stephen Cave (ethics/justice), e.g.
https://dl.acm.org/doi/10.1145/3278721.3278780
Another would be Haydn Belfield’s new collaboration with Kerry McInerney
http://lcfi.ac.uk/projects/ai-futures-and-responsibility/global-politics-ai/
Jess Whittlestone’s online engagements with Seth Lazar have been pretty productive, I thought.
Some are hostile, but not all, and there are disagreements and divisions in AI ethics just as deep as, if not deeper than, those in EA or any other broad community with multiple important aims that you can think of.
External oversight of the power of big tech is a goal worth helping to accomplish. This is from one of the leading AI ethics orgs; it could almost as easily have come from an org like GovAI:
https://ainowinstitute.org/publication/gpai-is-high-risk-should-not-be-excluded-from-eu-ai-act
I’ve heard versions of the claim multiple times, including from people I’d expect to know better, so having the survey data to back it up might be helpful even if we’re confident we know the answer.
> “Despite the potential risks, EAs broadly believe super-intelligent AI should be pursued at all costs.”

This is inaccurate imo.
Could we get a survey on a few versions of this question? I think it’s actually super-rare in EA.
e.g.
“I believe super-intelligent AI should be pursued at all costs”
“I believe the benefits outweigh the risks of pursuing superintelligent AI”
“I believe if the risk of doom can be agreed to be <0.2, then the benefits of AI outweigh the risks”
“I believe even if misalignment risk can be reduced to near 0, pursuing superintelligence is undesirable”
Wout Schellart, Jose Hernandez-Orallo, and Lexin Zhou have started an AI evaluation digest, which includes relevant benchmark papers etc. It’s pretty brief, but they’re looking for more contributors, so if you want to join in and help make it more comprehensive/contextualised, you should reach out!
https://groups.google.com/g/ai-eval/c/YBLo0fTLvUk
Less directly relevant, but Harry Law also has a new newsletter in the Jack Clark style, though more focused on governance/history/lessons for AI:
https://learningfromexamples.substack.com/p/the-week-in-examples-3-2-september
John’s comment points to another interesting tension.
CSER was indeed intended to be pluralistic and to provide space for heterodox approaches. And the general ‘vibe’ John gestures towards (I take it he’s not intending to be fully literal here—please correct me if I’m misinterpreting, John) is certainly more present at CSER than at other Xrisk orgs. It is also a vibe that is regularly rejected as a majority position in internal CSER full-group discussions. However, some groups and lineages are much more activist and evangelical in their approach than others. Hence they crowd out other heterodoxies and create an outsized external footprint, which can further make it difficult for other heterodoxies to thrive (whether in a centre or community). The CSER-type heterodoxy John and (I suspect) much of EA is familiar with is one that much of CSER is indifferent to, or disagrees with to various degrees. Other heterodoxies are… quieter.
In creating a pluralistic ERS, some diversities (as discussed by others) will be excluded from the get-go (perhaps for good reasons; I offer no comment on this). Of those included/tolerated, some will be far better equipped with the tools to assert themselves. Disagreements are good, but the field on which disagreements are debated is often not an even one. Figuring out how to navigate this would be one of the key challenges for the proposed approach, I would think.
Sharing a relevant blog post today by Harry Law on the limits to growth and predictions of doom, and lessons for AI governance, which cites this post.
Apologies that I still owe some replies to the discussion below, I’ve found it all really helpful (thank you!). I agree with those who say that it would be useful to have some deeper historical analysis of the impact of past ‘doomer’ predictions on credibility, which is clearly informative to the question of the weight we should assign to the ‘cry wolf’ concern.
https://www.harrylaw.co.uk/post/ai-governance-and-the-limits-to-growth
I think that (hinges on timelines) is right. Other than the first, I think most of my suggestions come at minimal cost to the short-timelines world, and will help with minimising friction/reputational hit in the long-timelines world. Re: the first, not delivering the strongest (and least hedged) version of the argument may weaken the message for the short-timelines world. But I note that even within this community there is wide uncertainty and disagreement re: timelines; very short timelines are far from consensus.
Thanks! Re:
1. I think this is plausible (though I’m unclear on whether you mean ‘we as the AI risk research community’ or ‘we as humanity’ here).
2. This bias definitely exists, but AI in the last year has cut through to broader society in a huge way (I keep overhearing conversations about ChatGPT and other things in cafes, on trains, etc., admittedly in the Cambridge/London area; suddenly random family members have takes; it’s showing up in my wife’s social media, and being written about by the political journalists she follows, where it never did before). Ditto (although to a smaller extent) AI xrisk. EA/FTX didn’t cut through to anything like the same extent.
> Cotton-Barratt could have been thrown out without any possibility of discussion. I am reliably told this is the policy of some UK universities.
Depending on what ‘discussion’ means here, I’d be surprised. It would be illegal to fire someone without due process. Whether the discussion would be public, as it is here, is a different matter; there tends to be a push towards confidentiality.
For balance: I’ve been an advocate for victims in several similar cases in UK universities, at least one of which was considerably more severe than what I’ve seen described in this case. I’ve encountered intervention and pressure from senior academic/administrative figures to discourage formal complaints being submitted, resulting in zero consequences for the perpetrator and the victims leaving their roles. I would expect this to be the outcome more often, on average, than the very strong reaction Nathan describes.