Thanks Sarah!
Garrison
Bengio and Hinton are the two most-cited researchers alive. Ilya Sutskever is the 3rd most cited AI researcher, and though he’s not on that paper, the superalignment intro blog post from OpenAI says this: “Currently, we don’t have a solution for steering or controlling a potentially superintelligent AI, and preventing it from going rogue.” LeCun is probably the top AI researcher who’s not worried about controlling a superintelligence (4th in total citations after Sutskever).
This is obviously a semantics disagreement, but I stand by the original claim. Note that I’m not saying that all the top AI researchers are worried about x-risk.
In regards to your overall point, it does not rebut the idea that some people have been cynically exploiting AI fears for their own gain. I mean, remember that OpenAI was founded as an AI safety organisation. The actions of Sam Altman seem entirely consistent with someone hyping x-risk in order to get funding and support for OpenAI, then pivoting to downplaying risk as soon as ditching safety becomes more profitable. I doubt this applies to all people or even the majority, but it does seem like it’s happened at least once.
I largely agree with this and alluded to this possibility here:
If AI companies ever needed to rely on doomsday fears to lure investors and engineers, they definitely don’t anymore.
I might write a separate piece on the best evidence for the hype argument; I think OpenAI has been its biggest beneficiary. My guess is that Altman actually did believe what he was saying about AI risk back in 2015. Superintelligence had come out the year before, and it’s not a surprising view for him to have held given what else we know about him.
I’d also guess that Altman and Elon are two of the people most associated with the x-risk story, and that this association has been the biggest driver of skepticism about it.
There’s also been more recent evidence of him ditching x-risk fears now that it seems convenient. From a recent Fox News interview:
Interviewer: “A lot of people who don’t understand AI, and I would put myself in that category, have got a basic understanding, but they worry about AI becoming sentient, about it making autonomous decisions, about it telling humans you’re no longer in charge?”
Altman: “It doesn’t seem to me to be where things are heading…is it conscious or not will not be the right question, it will be how complex of a task can it do on its own?”
Interviewer: “What about when the tool gets smarter than we are? Or the tool decides to take over?”
Altman: “I think tools in many senses are already smarter than we are. I think that the internet is smarter than you or I, the internet knows a lot of things. In fact, society itself is vastly smarter and more capable than any one person. I think we’re already good at working with tools, institutions, structures, whatever you want to call it, that are vastly more capable than one person and as long as we have a reasonably level playing field where [no] one person or one company has vastly more power than anybody else, I think we know how to deal with that.”
Hsu clarified his position on my thread here:
“Clarifications:
1. The mafia tendencies (careerist groups working together out of self-interest and not to advance science itself) are present in the West as well these days. In fact the term was first used in this way by Italian academics.
2. They’re not against big breakthroughs in PRC, esp. obvious ones. The bureaucracy bases promotions, raises, etc. on metrics like publications in top journals, citations, … However there are very obvious wins that they will go after in a coordinated way—including AI, semiconductors, new energy tech, etc.
3. I could be described as a China hawk in that I’ve been pointing to a US-China competition as unavoidable for over a decade. But I think I have more realistic views about what is happening in PRC than most China hawks. I also try to focus on simple descriptive analysis rather than getting distracted by normative midwit stuff.
4. There is coordinated planning btw govt and industry in PRC to stay at the frontier in AI/AGI/ASI. They are less susceptible to “visionaries” (ie grifters) so you’ll find fewer doomers or singularitarians, etc. Certainly not in the top govt positions. The quiet confidence I mentioned extends to AI, not just semiconductors and other key technologies.”
Pasted from LW:
Hey Seth, appreciate the detailed engagement. I don’t think the 2017 report is the best way to understand what China’s intentions are WRT AI, but there was nothing in the report to support Helberg’s claim to Reuters. I also cite multiple other sources discussing more recent developments (with the caveat in the piece that they should be taken with a grain of salt). I think the fact that this commission was not able to find evidence for the “China is racing to AGI” claim is actually pretty convincing evidence in itself. I’m very interested in better understanding China’s intentions here and plan to deep dive into it over the next few months, but I didn’t want to wait until I could exhaustively search for the evidence that the report should have offered while an extremely dangerous and unsupported narrative takes off.
I also really don’t get the pushback about the errors. These really were less technical errors than basic factual errors and incoherent statements. They speak to a sloppiness that should affect how seriously the report is taken. I’m not one to gatekeep AI expertise, but I don’t think it’s too much to expect a congressional commission whose top recommendation is to commence a militaristic AI arms race to have SOMEONE read a draft who knows that chatgpt-3 isn’t a thing.
Thanks Camille! Glad you found it useful.
Yeah, I got some pushback on Twitter on this point. I now agree that it’s not a great analogy. My thinking was that we technically know how to build a quantum computer, but not one that is economically viable (which requires technical problems to be solved and for the thing to be scalable/not too expensive). Feels like an “all squares are rectangles, but not all rectangles are squares” thing. Like, quantum computing ISN’T economically viable, but that’s not the main problem with it right now.
Thanks so much Andy! Hope you enjoy :)
thank you!
“With Islamic terrorism, these involved mass surveillance and detention without trial.”
I think Islamist terrorism would be more accurate and less inflammatory.
BTW, this link (Buzan, Wæver and de Wilde, 1998) goes to a PaperPile citation that’s not publicly accessible.
I think building AI systems with some level of autonomy/agency would make them much more useful, provided they are still aligned with the interests of their users/creators. There’s already evidence that companies are moving in this direction based on the business case: https://jacobin.com/2024/01/can-humanity-survive-ai#:~:text=Further%2C%20academics%20and,is%20pretty%20good.%E2%80%9D
This isn’t exactly the same as self-interest, though. I think a better analogy for this might be human domestication of animals for agriculture. It’s not in the self-interest of a factory farmed chicken to be on a factory farm, but humans have power over which animals exist so we’ll make sure there are lots of animals who serve our interests. AI systems will be selected for to the extent they serve the interests of people making and buying them.
RE international development: competition between states undercuts arguments for domestic safety regulations/practices. These pressures are exacerbated by beliefs that international rivals will behave less safely/responsibly, but you don’t actually need to believe that to justify cutting corners domestically. If China or Russia built an AGI that was totally safe in the sense that it is aligned with its creators’ interests, that would still be seen as a big threat by the US govt.
If you think that building AGI is extremely dangerous no matter who does it, then having more well-resourced players in the space increases the overall risk.
People can and should read whoever and whatever they want! But who a conference chooses to platform/invite reflects on the values of the conference organizers, and any funders and communities adjacent to that conference.
Ultimately, I think that almost all of us would agree that it would be bad for a group we’re associated with to platform/invite open Nazis. I.e., almost no one is an absolutist on this issue. If you agree, then you’re not in principle opposed to excluding people based on the content of their beliefs, so the question just becomes: where do you draw the line? (This is not a claim that anyone at Manifest actually qualifies as an open Nazi, more just a reductio to illustrate the point.)
Answering this question requires looking at the actual specifics: what views do people hold? Were those views legible to the event organizers? I fear that a lot of the discourse is getting bogged down in euphemism, abstraction, and appeals to “truth-seeking,” when the debate is actually about what kind of people and worldviews we give status to, and what effects that has on related communities.
If you think that EA adjacent orgs/venues should platform open Nazis, as long as they use similar jargon, then I simply disagree with you, but at least you’re being consistent.
My mistake on the Guardian US distinction, but to call it a “small newspaper” is wildly off base, and for anyone interacting with the piece on social media, the distinction is not legible.
Candidly, I think you’re taking this topic too personally to reason clearly. I think any reasonable person evaluating the online discussion surrounding Manifest would see it as “controversial.” Even if you completely excluded the Guardian article, this post, Austin’s, and the deluge of comments would be enough to show that.
This also no longer feels like a productive conversation, and it’s distracting from the object-level questions.
What prominent left-wing thinkers have exhibited antisemitism recently?
It’s not just a matter of a speaker’s net effect on attendance/interest. Alex Jones would probably draw lots of new people to a Manifest conference, but are they the types of people you want to be there? Who you choose to platform, especially at a small, young conference, will have a large effect on the makeup and culture of the related communities.
Additionally, given how toxic these views are in the wider culture, any association between them and prediction markets is likely to be bad for the long-term health of the prediction community.
I’d suggest link-searching stories on Twitter to see what the general response is. My Twitter feed was also full of people picking the story apart, but that’s clearly more a reflection of who I follow! Many people were critical (for very good reason, mind you!), but many praised it (see for yourself). There were a ton of mistakes in the article, and I agree that the authors seemed to have a major axe to grind with the communities involved. I’m a journalist myself, and I would be deeply embarrassed to publish a story with so many errors.
I didn’t claim that the event was controversial solely because of the Guardian article — I also mentioned the ensuing conversation, which includes this heavily commented and voted-upon post.
And whether you like it or not, The Guardian is one of the largest newspapers in the world, with half of the traffic of the NY Times!
The obvious reason to not put too much weight on positive survey results from attendees: the selection effect.
There are surely people (e.g. Peter Wildeford, as he mentioned) who would have contributed to and benefited from Manifest but don’t attend because of past and present speaker choices. As others have mentioned, being maximally inclusive will end up excluding people who (justifiably!) don’t want to share space with racists. By including people like Hanania, you’re making an implicit vote that you’d rather have people with racist views than people who wouldn’t attend because of those people. Not a trade I would make.
This is helpful context. I think it is still a bit unsettling that there was a noticeable strain of this type of stuff among the attendees (like if I went to a ticketed party and noticed that 5% of the attendees were somehow into race science, I’d feel uncomfortable and want to leave).
I think controversial is a totally fair and accurate description of the event given that it was the subject of a very critical story from a major newspaper, which then generated lots of heated commentary online.
And just as a data point, there is a much larger divide between EAs and rationalists in NYC (where I’ve been for 6+ years), and I think this has made the EA community here more welcoming to types of people that the Bay has struggled with. I’ve also heard of so many people who have really negative impressions of EA based on their experiences in the Bay which seem specifically related to elements of the rationalist community/culture.
Idk what caused this to be the case, and I’m not suggesting that rationalists should be purposefully excluded from EA spaces/events, but I think there are major risks to EA in being closely identified with the rationality community.
Thanks for writing this, but apparently the waiver is not totally effective (I have this on good authority, but can’t really say more right now). See this paragraph from the NYT article: “The waiver, announced by Secretary of State Marco Rubio, seemed to allow for the distribution of H.I.V. medications, but whether the waiver extended to preventive drugs or other services offered by the program, the President’s Emergency Plan for AIDS Relief, was not immediately clear.”