As a counterpoint to this: AI causing human extinction can be good if the AI is benevolent
Uh… I think there’s a lot of load-bearing being done by the words ‘benevolent’ and ‘can be’ here[1]
Like I think that, outside of the most naïve consequentialism, it’d be hard to argue that this would be a moral course of action, or that this state of affairs would be best described as ‘benevolent’ - the AI certainly wouldn’t be ‘benevolent’ toward humanity
Though probably a topic for another post (or dialogue)? Appreciated both yours and Ulrik’s comments above :)
[1] And ‘good’, but metaethics will be metaethics
Thanks for the comment, JWS!
I agree it is too far outside the scope of this post to be discussed here, and I do not think I have enough to say to sustain a dialogue, but I encourage people interested in this to check out Matthew Barnett’s related quick take.