Since he has graduated from begging the question to openly stating that blowing up things and people is a reasonable thing to do, I think we should dispense with the politeness that has been the norm around these discussions in the past.
A couple of things make me inclined to disagree with you about whether this will alienate people, including:
1) The reaction on Twitter seems okay so far
2) Over the past few months, I’ve noticed a qualitative shift among non-EA friends/family regarding their concerns about AI; people seem worried
3) Some of the signatories of the FLI letter didn’t seem to be the usual suspects; I have heard one prominent signatory openly criticize EA, so that feels like a shift, too
4) I think smart, reasonable people who have been exposed to ChatGPT but don’t know much about AI—i.e., many TIME readers—intuitively get that “powerful thing we don’t really understand + very rapid progress + lack of regulation/coordination/good policy” is a very dangerous mix
I’d actually be eager to hear more EAs talk about how they became concerned about AI safety, because I was persuaded over the course of one long conversation that this was something we should be paying close attention to, and it would take even less convincing today. Maybe we should send this article to a few non-EA friends/family members and see what their reaction is?
So, this has blown up way more than I expected, and things are chaotic. I’m still not sure what will happen or whether a treaty is actually in the cards, but I’m beginning to see a potential path to much more investment in alignment. One example: Jeff Bezos just followed Eliezer on Twitter, and I think the article may catch the attention of powerful, wealthy people who want to see AI go well. We are so far off-distribution that this could go in any direction.
Wow, Bezos has indeed just followed Eliezer:
https://twitter.com/BigTechAlert/status/1641659849539833856
Related: “Amazon partners with startup Hugging Face for ChatGPT rival” (Los Angeles Times, February 21, 2023)
In case we have very different feeds, here’s a set of tweets critical about the article:
https://twitter.com/mattparlmer/status/1641230149663203330?s=61&t=ryK3X96D_TkGJtvu2rm0uw (lots of quote-tweets on this one)
https://twitter.com/jachiam0/status/1641271197316055041?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/finbarrtimbers/status/1641266526014803968?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/plinz/status/1641256720864530432?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/perrymetzger/status/1641280544007675904?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/post_alchemist/status/1641274166966996992?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/keerthanpg/status/1641268756071718913?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/levi7hart/status/1641261194903445504?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/luke_metro/status/1641232090036600832?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/gfodor/status/1641236230611562496?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/luke_metro/status/1641263301169680386?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/perrymetzger/status/1641259371568005120?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/elaifresh/status/1641252322230808577?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/markovmagnifico/status/1641249417088098304?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/interpretantion/status/1641274843692691463?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/lan_dao_/status/1641248437139300352?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/lan_dao_/status/1641249458053861377?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/growing_daniel/status/1641246902363766784?s=61&t=ryK3X96D_TkGJtvu2rm0uw
https://twitter.com/alexandrosm/status/1641259179955601408?s=61&t=ryK3X96D_TkGJtvu2rm0uw
Yeah, I’m definitely not disputing that some people will be alienated by this. My basic reaction is just: AI safety people are already familiar with EY’s takes; I suspect people like my parents will read this and be like “whoa, this makes some sense and is kind of scary.” (With regard to differing feeds, I just put the link to the article into the Twitter search bar and sorted by latest. I still think the negative responses are a minority.)
Worth noting that Matt Parlmer has said: