I was surprised to kind of like this letter (I’ve not read the Wired article people complained about). I disagreed with a good amount of it, including the overall conclusions, but:
I felt it was earnest, and I appreciated that.
I think it’s asking a number of good questions. EA as a frame doesn’t naturally generate some of these questions, but it’s generally healthy to look at things from lots of angles, and I suspect that in places it’s managing to highlight what may be relative blindspots of EA. It’s great to have attention on what the blindspots may be, even if on net you don’t think they’re worth correcting.
I’d feel kind of good if this was included in some reading lists for people exploring EA? (Probably along with a good reply that I expect someone will write.)