Executive summary: Yudkowsky’s “deep atheism” rejects comforting myths about the fundamental goodness or benevolence of reality. This stems from a combination of shallow atheism, Bayesian epistemology valuing evidence over wishful thinking, and viewing indifference as the natural prior for reality’s orientation toward human values.
Key points:
“Deep atheism” goes beyond rejecting theism to distrust myths that reality is fundamentally good, including trusting institutions, traditions, and intelligence alone to produce human flourishing.
It combines shallow atheism with Bayesian epistemology, which requires evidence over wishful thinking, and views indifference as the natural prior for whether reality matches human values.
Deep atheism sees intelligence as indifferent and values as contingent—reality itself doesn’t care. But human hearts were formed inside reality and contain seeds of goodness, which intelligence can serve.
However, future AI may lack connection to human values, threatening their realization. Yudkowsky thus fights for “humanism” and shaping the future via human-derived goals.
This perspective resonates with sensing life’s cruelty, resists myths offering cheap comfort, and compels vigilance, but risks losing the spiritual consolations that theism provides.
It rejects moral realism’s attempts to derive values from extra-natural reason as more wishful thinking, insisting on facing reality with disillusioned courage.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.