I was surprised to read this from Peter Singer, a thoroughgoing utilitarian whom I often see as somewhat extreme in how EA-aligned his beliefs are. I don’t particularly agree with this conclusion:
When taking steps to reduce the risk that we will become extinct, we should focus on means that also further the interests of present and near-future people. If we are at the hinge of history, enabling people to escape poverty and get an education is as likely to move things in the right direction as almost anything else we might do.
It seems extremely unlikely to me that alleviating global poverty is as effective at reducing existential risk as more targeted interventions, such as AI safety research. MichaelStJules writes more about this in his comment here.
Nevertheless, I found it valuable to see how Peter Singer views longtermism.
Elistratov writes more on Peter Singer’s thoughts on existential risk here.