How to make hard choices, Ruth Chang
Link to the podcast with philosopher Ruth Chang on hard choices and decision-making: https://www.thendobetter.com/arts/2024/8/2/ruth-chang-making-hard-choices-philosophy-agency-commitment-derek-parfit-podcast
Introduction
Ruth Chang is a prominent philosopher known for her work in decision theory, practical reason, and moral philosophy.
She is best known for her theory of “hard choices,” in which she argues that many choices are not settled by objective reasons but instead involve values that are incommensurable.
Summary
The podcast discussion examines the inadequacy of the traditional trichotomous framework (better, worse, or equal) for evaluating values and making decisions. Chang argues for recognizing “hard choices” as situations where the options are qualitatively different and neither better, worse, nor equal to one another; instead, they are “on a par”. This idea is applied to various scenarios, from career decisions to healthcare dilemmas to the design of AI systems. Chang highlights the importance of human agency in making commitments when faced with hard choices, offering a framework to help individuals become the authors of their own lives. She also shares insights about her current projects aimed at rectifying fundamental misunderstandings about value in AI design, advocating for a more nuanced and human-aligned approach to machine learning.
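As a loose illustration (my own toy sketch, not anything from the podcast or from Chang’s papers): a trichotomous comparison forces every pair of options into better/worse/equal, whereas Chang’s picture adds a fourth verdict. In the minimal Python sketch below, the option names and value dimensions are hypothetical, and the simple dominance rule is only a stand-in for her much richer account of parity.

```python
from enum import Enum

class Relation(Enum):
    BETTER = "better"
    WORSE = "worse"
    EQUAL = "equal"
    ON_A_PAR = "on a par"  # the fourth relation, beyond the trichotomy

def compare(a: dict, b: dict) -> Relation:
    """Toy comparison across qualitatively different value dimensions.

    One option counts as better only if it is at least as good on every
    shared dimension (and not identical overall); exact ties count as
    equal; any genuine trade-off is treated as "on a par" rather than
    being forced into the trichotomy.
    """
    dims = a.keys() & b.keys()  # compare only shared dimensions
    if all(a[d] == b[d] for d in dims):
        return Relation.EQUAL
    if all(a[d] >= b[d] for d in dims):
        return Relation.BETTER
    if all(a[d] <= b[d] for d in dims):
        return Relation.WORSE
    return Relation.ON_A_PAR

# Hypothetical career choice: the scores are made up for illustration.
banking = {"income": 9, "creativity": 3, "autonomy": 5}
art = {"income": 3, "creativity": 9, "autonomy": 7}
print(compare(banking, art))  # Relation.ON_A_PAR
```

The point is only structural: once “on a par” is a possible verdict, a decision procedure cannot simply maximize, which is where Chang’s appeal to agency and commitment comes in.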
The episode also touches on the philosophical influence of Derek Parfit and explores concepts like effective altruism, transformative experiences, and the value of commitment in living a meaningful life.
Brief Thoughts
I am not 100% certain I fully understand Chang’s arguments, but they offer another way of framing hard choices when making, e.g., social-impact or moral decisions. If you look at her papers, there are many technical caveats (e.g. not all choices are hard), but the work is considered substantive. We also briefly touch on EA.
But I thought I’d share it with the EA folk here as food for thought in making decisions. She studied with Parfit as well.
If I understand her correctly, and if she is right, her view could be quite important for thinking about AI alignment. It also offers a different path forward for philanthropic giving when the choices seem to be on a par.