I’m on a gap year from studying for a Master of Social Entrepreneurship degree at the University of Southern California.
I have thought along EA lines for as long as I can remember, and I recently wrote the first draft of a book, “Ways to Save The World,” about my top innovative ideas for broad approaches to reducing existential risk.
I am now doing more research on how these approaches interact with AI X-risk.
Anyone else ever feel a strong discordance between emotional response and cognitive worldview when it comes to EA issues?
Like emotionally I’m like “save the animals! All animals deserve love and protection and we should make sure they can all thrive and be happy with autonomy and evolve toward more intelligent species so we can live together in a diverse human-animal utopia, yay big tent EA…”
But logically I’m like “AI and/or other exponential technologies are right around the corner and make animal issues completely immaterial. Anything that detracts from progress on that is a distraction and should be completely and deliberately ignored. Optimally we will build an AI or other system that determines maximum utility per unit of matter, possibly including agency as a factor and quite possibly not, so that we can tile the universe with sentient simulations of whatever the answer is.”
OR, a similar discordance between what was just described and the view that we should also co-optimize for agency, diversity of values and experience, fun, decentralization, etc., EVEN IF that means possibly locking in a state where ~99.9999+ percent of possible utility goes unrealized.
It’s very frustrating. I usually try to push myself toward my rational conclusion of what is best, with a wide margin for uncertainty and epistemic humility, but it feels depressing, painful, and self-dehumanizing to do so.