When I write "skepticism of formal philosophy", I more precisely mean "skepticism that philosophical principles can capture all of what's intuitively important". Here's an example of skepticism of formal philosophy from Scott Alexander's review of What We Owe The Future:
I'm not sure I want to play the philosophy game. Maybe MacAskill can come up with some clever proof that the commitments I list above imply I have to have my eyes pecked out by angry seagulls or something. If that's true, I will just not do that, and switch to some other set of axioms. If I can't find any system of axioms that doesn't do something terrible when extended to infinity, I will just refuse to extend things to infinity... I realize this is "anti-intellectual" and "defeating the entire point of philosophy".
You make a good point regarding the relative niche-ness of animal welfare and AI x-risk. I agree that my post's analogy is crude and there are many reasons why people's dispositions might favor AI x-risk reduction over animal welfare.
Thanks for the compliment :)