Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.
Unvarnished critical (but constructive) feedback is welcome.
[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]
Thinks longtermism rests on a false premise – some sort of total impartiality.
Thinks we should spend a lot more resources trying to delay HLMI – make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet “luddite” so long as this is understood to describe someone who:
suspects that on net, technological progress yields diminishing returns in human flourishing.
OR believes workers have a right to organize to defend their interests (you know – what the original Luddites were doing). Fighting to uphold higher working standards puts you on the front lines against Moloch (see e.g. Fleming’s vanishing economy dilemma and how decreased working hours offers a simple solution).
OR suspects that, with regard to AI, the Luddite fallacy may not be a fallacy: AI really could lead to widespread, permanent technological unemployment, and that might not be a good thing.
OR considering the common-sensey thought that societies have a maximum rate of adaptation, suspects excessive rates of technological change can lead to harms, independent of how the technology is used. (This thought is more speculative/less researched – would love to hear evidence for or against.)
You’re right that post-hoc articles are usually full of hindsight bias, which makes them a lot less valuable. That’s why I tried not to make the article too much about SBF (no, this is not part 1 of a series). I laid that out from the beginning:
If you want a prediction, I give one right after this:
I reiterate this when I say “I fear it is widespread in this community” where “it” is a certain coldness toward ethical choices (and other choices that would normally be full of affect).
SBF is topical, and I thought this was a good opportunity to highlight this lesson about not engaging in excessive reasoning. But I agree my title isn’t great. Suggestions?