c.trout

Karma: 96

Philosophy graduate interested in metaphysics, meta-ethics, AI safety, and a whole bunch of other things. Meta-ethical and moral theories of choice: neo-Aristotelian naturalist realism + virtue ethics.

Unvarnished critical (but constructive) feedback is welcome.

[Out-of-date-but-still-sorta-representative-of-my-thoughts hot takes below]

Thinks longtermism rests on a false premise: some sort of total impartiality.

Thinks we should spend far more resources trying to delay HLMI, i.e. to make AGI development uncool. Questions what we really need AGI for anyway. Accepts the epithet “luddite” so long as it is understood to describe someone who: