The LessWrong comments here are generally quite brutal, and I think I disagree, which I’ll try to outline briefly below. But it may be more fruitful here to ask some questions that break down the possible points of disagreement about the merits of this letter.
I expected some negative reaction, because I know Elon is generally looked down upon by the EAs I know, with some solid backing for those claims when it comes to AI given that he co-founded OpenAI. But with the immediate press attention the letter is getting, combined with some heavy-hitting signatures (including Elon Musk, Stuart Russell, Steve Wozniak (co-founder, Apple), Andrew Yang, Jaan Tallinn (co-founder, Skype, CSER, FLI), Max Tegmark (president, FLI), and Tristan Harris (of The Social Dilemma), among many others), I can’t really see the overall impact of this letter being net negative. At worst it seems mistimed and technically flawed; at best it seems like one of the better calls to action (or for a global moratorium, as Greg Colbourn put it) that could have happened, given AI’s current presence in the news and in much of the world’s psyche.
But I’m not very certain about any of this, and I came away with a lot of questions. Here are a few:
1. How closely does this specific call for a pause on developing strong language models match how AI x-risk people would craft a verifiable, tangible metric for AI labs to follow to reduce risk? Should it be seen as a good first step? Or might it actually be close enough to what we want that we could rally around this metric, given its endorsement by such an influential group?
This helps clarify the “six months isn’t enough to develop the safety techniques they detail” objection, which was fairly well addressed here, as well as the “should OpenAI be at the front?” objection.
2. How much weight should we give to messages that are geared more towards non-x-risk AI worries than the community is? The letter asks a lot of good questions, but it also asks “Should we let machines flood our information channels with propaganda and untruth?”, an important question, but one that seems to me to drift away from AI x-risk concerns.
This is at least tangential to the “this letter felt rushed” objection: even if you accept that it was rushed, the next question is “What’s our bar for how good something has to be before it’s put out into the world?”
3. Are open letters with influential signees impactful? This letter seems to me neutral at worst and quite impactful at best, but I have very little to back that up, and I honestly can’t recall any specific case where an open letter caused significant change at the global or national level.
4. Given the recent desire to distance ourselves from potentially fraught figures, should we shy away from a community-wide EA endorsement of such a letter because a wild card like Elon is part of it? I personally don’t think he’s at that level, but I know other EAs who would be apt to characterize him that way.
5. Should I sign the letter? What is the impact of adding signatures with significantly less professional or social clout to such an open letter? Does it promote the message of AI risk as something that matters to everyone? Or would someone look at “Tristan Williams, Tea Brewer” and think “What is he doing on this list?”
I’ll have a go at answering your questions:
1. This is a great first step. Really, any kind of half-decent foot in the door is good at this stage, whilst the shock of GPT-4 is still fresh. A much better letter even two months from now would be worse, I think.
2. Engendering broad support for a moratorium is good. We don’t need everyone to be behind it for x-risk reasons, but we do need a global majority to be behind it. This is why I’ve said that it might be good if a taboo around AGI development can be inculcated in society—a taboo is stronger than regulation.
3. Would be interested to see data on this.
4. I don’t think this is a significant concern. With broad enough support everyone can have at least a few people they greatly admire on the list.
5. Yes, I think the more signatures, the better. We need the whole world (or at least a large majority of it) to get behind a global moratorium on AGI development!
I tend to agree at first glance, but when you take into account the counternarrative that has cropped up, that “this is just a list of losing AI developers trying to retake control”, I wonder whether the letter will press on productively or become fuel for the “people worried about AI safety are just selfish elitists” fire that Timnit Gebru is always stoking.
I think I just flatly agree here.
Someone from LessWrong mentioned the Letter of three hundred, which I’d like to check out in this context.
Mmm, not so sure about this. I think there’s a much stronger “X, whom I really don’t like, is involved in this, so I won’t involve myself in it” motivation nowadays. Twitter is a relevant example: Musk taking over was enough for many to leave, even though people they still admired and were interested in engaging with were still on the platform. I like the paradigm of “everyone has someone to like, so we can all like it”, but I think today we’ve moved more towards distancing ourselves from people we don’t like, in a way that makes me wonder whether the former is still possible. What do you think about that?
Cool, will maybe sign then!
Thanks for responding too! I appreciate the engagement; it makes thinking about these sorts of things much more worthwhile.
Yudkowsky’s TIME article is a good counter to this: the blunt, no-holds-barred version of what all the fuss is about.
:)
Thanks for the link, and it’s good that there’s precedent.
How many of the big accounts that threatened to leave Twitter actually have? I’ve seen a lot just continue to threaten while they keep posting. As Elon says, at least it’s not boring. I hope we’re at a high point of polarisation and that things will get better. Maybe open-sourcing the Twitter algorithm could be a first step (i.e. if social media becomes less polarised as a result, due to anger being downweighted or something).
Great :)