What are the most significant ways you’ve changed your mind recently in relation to EA and EA priorities, philosophy and ethics?
My ethical and philosophical views haven’t changed a huge amount.
I’ve become even less confident in most EA interventions than I was (and I started out very unconfident). I think there are various plausible reasons why most EA activities could easily turn out to be net negative. I don’t know whether I’ve become more or less confident about research specifically in recent years in absolute terms, but it has definitely become relatively more appealing as a comparatively robust strategy.
I’ve become a bit more longtermist in outlook and more uncertain of the sign/effect size of most interventions/projects, mostly due to issues around indirect effects/cluelessness.
Ethics: some years ago I was a utilitarian and pushed myself to do utilitarian things. Then I realized there are other values I care about, and I tried to specify what they are. Eventually I realized that's impossible because there are too many. I then still tried to specify what actions I should push myself to do in order to achieve my vaguely defined long-term goals. Now I've abandoned even that, and I just do whatever I want. It didn't really change much in terms of behaviour; e.g., I still want to never lie. I just don't think about it in terms of ethics. Also, my mindset is different: more easy-going. Some ethical stances did change, though. For example, past me would've pressed a button to create a utilitronium shockwave, because that's the logical conclusion of utilitarianism. Now I wouldn't press such a button, because I don't want to. I don't claim that this approach to life and ethics is better or correct in any way, though, and I don't know if I should stick to it. If anyone has reasons why I should change it, I'd be curious to read them.