Hi Zach, thank you for your comment. I’ll field this one, as I wrote both of the summaries.
“This strongly suggests that Bostrom is commenting on LaMDA, but he’s discussing ‘the ethics and political status of digital minds’ in general.”
I’m comfortable with this suggestion. Bostrom’s comment was made (i.e., uploaded to nickbostrom.com) the day after the Lemoine story broke (source: I manage the website).
“[Yudkowsky] recently announced that MIRI had pretty much given up on solving AI alignment”
I chose this phrasing on the basis of the second sentence of the post: “MIRI didn’t solve AGI alignment and at least knows that it didn’t.” Thanks for pointing me to Bensinger’s comment, which I hadn’t seen. I remain confused about how much of the post should be interpreted literally vs. tongue-in-cheek. I will add the following note to the summary:
(Edit: Rob Bensinger clarifies in the comments that “MIRI has [not] decided to give up on reducing existential risk from AI.”)
Thanks!