I think it would be helpful for philosophers to think about those problems specifically in the context of AI alignment.
That makes sense; agree there’s lots of work to do there.
Any chance you could discuss this issue with her and perhaps suggest adding work on technical AI safety as an option that EA-aligned philosophers, or people with philosophy backgrounds, should strongly consider?
Have sent an email! :)