Hi! I’m somewhat new to EA—I’d heard of the ideas years ago but only started engaging with the community recently after doing the Intro to EA Virtual Program.
I work in International Tax Policy and am more sympathetic to neartermist causes such as global health and poverty reduction than longtermist ones.
I read a lot of non-fiction books and summarise them on my website, To Summarise.
Hi—I’m new to the Forum and just want to offer some support for your point here. I’ve just completed the 8-week Intro to EA Virtual Program, and I definitely got hung up on the Longtermism and Existential Risk weeks. I’ve spent quite a few hours reading through materials on the Total View and the Person-Affecting View, and I’m currently drafting a blog post to work through and organise my thoughts.
I still feel highly sceptical of the Total View, so much so that I’ve been questioning how much I identify with longtermism, and even with EA more generally. I personally find some of its implications quite disturbing and potentially dangerous.
Anyway, I just wanted to support your post and thank you for the reminder that caring about AI alignment and biorisk does not require subscribing to controversial positions in moral philosophy.