I lead Effective Altruism Lund in southern Sweden while wrapping up my M.Sc. in Engineering Physics, specializing in machine learning. I'm a social team player who likes high ceilings and big-picture work. Scared of AI, intrigued by biorisk, hopeful about animal welfare.
I'm sorry if the title was misleading; that was not my intention. I think you and I have different views on the average forum user's population ethics. If I believed that more people reading this had a totalist (or similar) view, I would have been much more up front about my take not being valid for them. Believing the opposite, I put the conclusion you'd get from non-person-affecting views as a caveat instead.
That aside, I'd be happy to see the general discourse spell out more clearly that population ethics is a crux for x-risks. I've only gotten (and probably at some points given) the impression that x-risks are similarly important to other cause areas under all population ethics. This runs the risk of baiting people into working on things they logically shouldn't believe to be the most pressing problem.
On a personal note, I concede that extinction is much worse than 10 billion humans dying, though for non-quantitative reasons. Tegmark has said something along the lines of a universe without sapience being terribly boring, and that weighs quite heavily in my judgement of the disutility of extinction.