One underlying hypothesis that I think was not explicitly stated is that you are looking for priority arguments. That is, part of your argument concerns whether AI safety research is the most important thing you could do. (It might be so obvious at an EA meeting or on the EA Forum that it's not worth exploring, but I like making the obvious hypotheses explicit.)
This is a good point.
On the other hand, you could argue that without pure mathematics, almost all of the positive technological progress we have now (from quantum mechanics to computer science) would not exist.
I feel pretty unsure about this point; for a contrary perspective, you might enjoy this article.
I’m curious about the article, but the link points to nothing. ^^