I read the first few paragraphs, and there are a few mistakes:
Robert Long’s Lots of links on LaMDA provides an excellent summary of the saga and the ensuing discussion. We concur with Nick Bostrom’s assessment: “With recent advances in AI (and much more to come before too long, presumably) it is astonishing how neglected this issue still is.”
This strongly suggests that Bostrom is commenting on LaMDA, but he’s discussing “the ethics and political status of digital minds” in general.
Eliezer Yudkowsky’s AGI ruin: a list of lethalities has caused quite a stir. He recently announced that MIRI had pretty much given up on solving AI alignment, and in this (very long) post, he states his reasons for thinking that humanity is therefore doomed.
Yudkowsky did not announce this (and indeed it’s false; see, e.g., Bensinger’s comment), and the “therefore” in the above sentence makes no sense.
Hi Zach, thank you for your comment. I’ll field this one, as I wrote both of the summaries.
This strongly suggests that Bostrom is commenting on LaMDA, but he’s discussing “the ethics and political status of digital minds” in general.
I’m comfortable with the summary suggesting this: Bostrom’s comment was made (i.e. uploaded to nickbostrom.com) the day after the Lemoine story broke (source: I manage the website).
“[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment”
I chose this phrasing on the basis of the second sentence of the post: “MIRI didn’t solve AGI alignment and at least knows that it didn’t.” Thanks for pointing me to Bensinger’s comment, which I hadn’t seen. I remain unsure how much of the post should be interpreted literally vs. tongue-in-cheek. I will add the following note to the summary:
(Edit: Rob Bensinger clarifies in the comments that “MIRI has [not] decided to give up on reducing existential risk from AI.”)
Thanks!