But I often see important and controversial positions in moral philosophy thrown around in introductory EA material (introductory pitches and intro fellowships especially), like strong longtermism, the astronomical waste argument, and valuing future people equally to currently existing people. I think this is unnecessary, should be done less often, and makes these introductions significantly less effective.
Hi—I’m new to the forums and just want to provide some support for your point here. I’ve just completed the 8-week Intro to EA Virtual Program and I definitely got really hung up on the Longtermism and Existential Risk weeks. I’ve spent quite a few hours reading through materials on the Total View and Person-Affecting View and am currently drafting a blog post to work through and organise my thoughts.
I still feel highly sceptical of the Total View, to the point that I’ve been questioning how much I identify with longtermism, and even with “EA” more generally. I personally find some implications of the Total View quite disturbing and potentially dangerous.
So anyway, I just wanted to support your post and also thank you for reminding us that caring about AI alignment and biorisks does not require subscribing to controversial beliefs in moral philosophy.