That’s not my read? It starts by establishing Edwards as a trusted expert who pays attention to serious risks to humanity, then contrasts him with students who are “focused on a purely hypothetical risk”. Except the risks Edwards is concerned about (“autonomous weapons that target and kill without human intervention”) are also purely hypothetical, as is anything else that could wipe out humanity.
I read it as an attempt to present the facts accurately, but with a tone that is maybe 40% of the way along the continuum from “unsupportive” to “supportive”. Word choices and framings that read as unsupportive to me: “enthralled”; emphasizing that the outcome is “theoretical”; the fixed-pie framing of “prioritize the fight against rogue AI over other threats”; emphasizing Karnofsky’s conflicts of interest in response to a blog post that pre-dates those conflicts; bringing up the Bostrom controversy, which isn’t really relevant to the article; and “dorm-room musings accepted at face value in the forums”. That said, it does end on a positive note: Luby (the alternative expert) coming around, Edwards somewhere in between, and an official class on the topic at Stanford.
Overall, rather than reading the article as trying to be supportive or unsupportive, I think it’s mostly trying to stoke controversy.
Seems “within tolerance”. Like I guess I would nitpick some stuff, but does it seem egregiously unfair? No.
And in terms of tone, it’s pretty supportive.