Refining My Moral Values Using Steelman Solitaire

Hi everyone! This is my first EA Forum post, outlining a recent upskilling project I completed.

TL;DR: I used steelman solitaire to break down my moral framework and re-evaluate it on a fundamental level. I changed my mind in multiple significant ways and got better at avoiding anchoring bias. I highly recommend trying this project if you are seeking to improve your reasoning skills and/or refine your moral values.

Background

In May 2023, I was accepted to the February 2024 cohort of the Charity Entrepreneurship Incubation Program. While I have explored moral philosophy as a hobby for many years, I wanted to refine my beliefs before launching a new organization to ensure a solid theoretical foundation for my decision making.[1] In addition, I wanted to improve my critical reasoning and written communication skills.[2] Thus, between June 29, 2023 and January 13, 2024, I spent over 40 hours[3] deconstructing and re-evaluating my moral beliefs.

Methodology

To start, I outlined the topics and questions within moral philosophy that I wanted to explore. For each question, I read the relevant Stanford Encyclopedia of Philosophy entry; for topics without one, I used Wikipedia instead. I sometimes supplemented this with non-systematic Google searches, ChatGPT prompts, and conversations with friends. This was admittedly a somewhat shallow exploration, but since my time was limited and I wanted to touch on every topic to some degree, I prioritized breadth over depth.

Once I had outlined all the major positions on a topic, I determined my initial leaning (if any). I then engaged in steelman solitaire.[4] This involved developing the strongest arguments, counter-arguments, counter-counter-arguments, and so on that I could, with the goal of changing my own mind. Once I successfully changed my mind, I would repeat the process and try to change it again. Sometimes this led to exploring a new position that was not in my original list, in which case I would write down the new position and proceed as usual. I repeated this process until efforts to change my position hit diminishing returns. I then recorded my “final” result[5] and the number of times I changed my mind.

The topics I explored are outlined below:

  • Metaethics

    • Do moral statements have truth values?

    • Are some moral statements objectively true?

    • If any moral statements are true (either objectively or subjectively), how can the truth of moral statements be known/learned?

  • Fundamentals of morality

    • Goals for my moral framework

    • Are good actions inherently good (deontological ethics) or instrumentally good (teleological ethics)?

    • What values (welfare, autonomy, personal virtues, etc.) are fundamentally good?

    • Broadly speaking, how should I use these fundamental values to make decisions?

    • Is there a fundamental difference between acts and omissions?

    • Who qualifies as a moral patient, and how should consideration for moral patients be weighted?

  • Special topics

    • Population ethics

    • What are my moral obligations?

    • Moral uncertainty

One of the weaknesses in my methodology is that some of my questions presuppose answers to other questions. This is explored further in the Discussion section of this report.

Results

My overall moral system is outlined below. I include this section primarily to encourage myself to write about nuanced topics with clarity and concision. It also seemed wrong to write about refining my moral system without sharing anything about said moral system. For brevity, I will summarize my current views rather than outlining my entire process for each issue. However, I will discuss some specific cases where my mind changed significantly or my exploration was notable.

Disclaimer: These results are accurate as of the publication of this post, but they are almost guaranteed to shift over time.

Metaethics

Do moral statements have truth values?

I do believe moral statements have truth values. However, I briefly convinced myself of universal prescriptivism[6] before coming to my final conclusion.

Are some moral statements objectively true?

I do not believe that any moral statements are objectively true. I believe that moral judgements are dependent on the subjective lenses and frameworks that we apply.

If any moral statements are true (either objectively or subjectively), how can the truth of moral statements be known/learned?

I believe moral judgements should be based on reason. However, this reasoning needs to be grounded in a set of assumptions, which I believe should be based on intuition.

Fundamentals of morality

Goals for my moral framework

  • Accuracy: My moral framework should be accurate to my most fundamental intuitions (though not necessarily my intuitions about complicated scenarios).

  • Consistency: My moral framework should not contradict itself.

  • Robustness: My moral framework should accommodate the fact that I am uncertain about many moral issues, and I am either uncertain or mistaken about many non-moral issues.

  • Applicability: My moral framework should give clear guidance for how to make important decisions, even if it is overly complicated for simple ones.

  • Simplicity: Holding all else equal, it is better for my moral framework to have fewer conditions and complicating factors.

Are good actions inherently good (deontological ethics) or instrumentally good (teleological ethics)?

I believe that good actions are instrumentally good.

What values (welfare, autonomy, personal virtues, etc.) are fundamentally good?

  • Welfare

    • I always believed that welfare was fundamental. However, I initially believed that suffering prevention was more important than happiness promotion. I now believe happiness and suffering levels should be weighted based on preferences (as outlined later). As a result, I no longer have any theoretical preference between suffering prevention and happiness promotion.

  • Autonomy

    • I initially did not think that autonomy was fundamentally good beyond its capacity to affect welfare. However, I changed my mind after considering cases when well-informed rational agents choose to do something that I believe is bad for them. In those cases, it seems wrong to stop them, but that contradicts a purely welfarist view.

  • Other values

    • I also considered personal virtues, equality, and justice. I determined that they are good, but only to the extent that they further welfare and autonomy.

    • At one point in this process, I considered justice to be fundamental. Within that belief system, I prioritized the welfare of those who promoted the welfare of others. However, due to my leaning towards behavioral determinism,[7] I do not think it makes sense to prioritize anyone’s welfare on a fundamental level. If I change my mind on behavioral determinism, I will likely add justice back into my moral framework.

Broadly speaking, how should I use these fundamental values to make decisions?

After multiple iterations, I defined the following system, which I am fairly satisfied with.

  • 1) I should seek to maximize happiness and minimize suffering.

    • Equivalent amounts of happiness and suffering are determined based upon the level of suffering people would generally endure to receive a given amount of happiness. This may not coincide with equivalent magnitudes of sensation.[8]

    • At one point, I considered some forms of happiness derived from the suffering of others and some forms of suffering derived from the happiness of others to be morally neutral. However, I do not think that this is consistent with my leaning towards behavioral determinism. Similar to my belief about justice as a fundamental value, if I change my mind about behavioral determinism, I will likely reincorporate these ideas into my moral framework.

  • 2) Interference, defined as taking actions that inhibit the choices of others or manipulating others into changing their decisions, should not be done under any of the following circumstances (restated in a brief sketch after this list):

    • a) It is possible to achieve the same ends through communication.

    • b) All beings I am trying to help by interfering are well-informed, consenting, and of sound mind.[9]

    • c) There is a reasonable chance that those I am trying to help, if well-informed and of sound mind, would believe that my actions are negative in expectation.

    • d) Both of the following two conditions are met.

      • (i) All beings I am trying to help do not believe in the harm I am trying to protect them from, but they are otherwise well-informed, consenting, and of sound mind.[10]

      • (ii) I can warn them about the potential harm, giving them the option to avoid it.
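
Since conditions (a) through (d) combine as a simple disjunction, with (d) itself a conjunction, the structure may be easier to scan as boolean logic. This is only a restatement of the list above; every name in the sketch is a hypothetical placeholder chosen purely for illustration.

```python
# A rough boolean restatement of the interference restrictions above.
# All names are illustrative placeholders, not terms used elsewhere in this post.

def interference_ruled_out(
    same_ends_achievable_by_communication: bool,  # condition (a)
    targets_informed_consenting_and_sane: bool,   # condition (b)
    targets_would_judge_action_negative: bool,    # condition (c)
    targets_only_disbelieve_the_harm: bool,       # condition (d)(i)
    can_warn_them_about_the_harm: bool,           # condition (d)(ii)
) -> bool:
    """Interference is prohibited if any of (a)-(c) holds,
    or if both parts of (d) hold."""
    return (
        same_ends_achievable_by_communication
        or targets_informed_consenting_and_sane
        or targets_would_judge_action_negative
        or (targets_only_disbelieve_the_harm and can_warn_them_about_the_harm)
    )
```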

Is there a fundamental difference between acts and omissions?

While I strongly considered the possibility that my new autonomy restrictions only applied to actions and not omissions, I eventually returned to the belief that omissions are just a type of action.

Who qualifies as a moral patient, and how should consideration for moral patients be weighted?

I believe that a being’s status as a moral patient should be determined on the basis of whether or not they are sentient.[11] All sentient beings should be treated impartially, weighted only according to the magnitude of happiness and suffering they might experience in a given situation.

Special topics

Population ethics

My primary exploration within population ethics was whether the welfare of potential new beings should be treated differently from the welfare of those who are already alive. My initial leaning was towards a person-affecting perspective. I thought that creating happy people was much less significant than making people happy (and possibly even morally neutral).

I now take a totalist perspective. I see no distinction between creating new happy beings and improving the lives of existing beings, assuming the change in total welfare is the same.

What are my moral obligations?

My initial belief was that I am obligated to maximize my impact for the most part, while leaving a buffer to remain sustainable. This was mostly unchanged, with one exception: in extreme circumstances, the marginal gains from sacrificing sustainability could exceed the impact I expect to have over the course of my life. In those situations alone, I believe that I am required to maximize my impact completely.

My biggest revelation on this topic was that I don’t believe moral obligations should be determined based on fundamental principles. Instead, I think they should be decided based on psychology and what will motivate people to do the greatest amount of good.

Moral uncertainty

I started out believing that moral uncertainty should be resolved by maximizing expected choiceworthiness. To determine the value of a given outcome under this framework, you take the average of the values assigned by various moral theories, weighted by your personal credence in each theory. However, this strategy overvalues theories that consider many actions to be maximally good or maximally bad relative to more nuanced theories. I considered five other distinct systems and landed on a cooperative moral parliament system[12] inspired by Newberry and Ord, though conceptualized slightly differently.[13]
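
To make the averaging concrete, maximizing expected choiceworthiness is standardly written along the following lines (the notation here is mine, used only for illustration): each available action $a$ is scored as

$$\mathrm{EC}(a) = \sum_{i} C(T_i)\,\mathrm{CW}_i(a),$$

where $C(T_i)$ is the credence assigned to moral theory $T_i$ and $\mathrm{CW}_i(a)$ is the choiceworthiness that theory assigns to $a$; the framework then recommends whichever action has the highest $\mathrm{EC}(a)$. A theory that rates many actions at extreme choiceworthiness values can dominate this sum even with modest credence, which is the overvaluation problem described above.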

To serve as an input for my moral parliament system, I assigned credence values to each of my major moral beliefs. Since I don’t believe there are objective truth values to moral statements, these credence values measure the probability that an infinitely-informed version of myself would converge on a given answer.[14] I will not be sharing my credence values as I don’t think I am well-calibrated enough to publish them confidently. The one credence value I will provide is P(“I won’t change my mind on anything”) ~ 0%.
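
As a small illustration of the seat-allocation step of the parliament (see footnote 12), here is a minimal sketch that assigns delegates in proportion to credences using largest-remainder rounding. The credences shown are placeholders rather than my actual values, and all names are hypothetical.

```python
from math import floor

def allocate_delegates(credences, seats=100):
    """Assign parliament seats in proportion to credences,
    using largest-remainder rounding so the seats sum to `seats`."""
    total = sum(credences.values())
    quotas = {theory: c / total * seats for theory, c in credences.items()}
    allocation = {theory: floor(q) for theory, q in quotas.items()}
    # Hand any remaining seats to the largest fractional remainders.
    leftover = seats - sum(allocation.values())
    by_remainder = sorted(quotas, key=lambda t: quotas[t] - allocation[t], reverse=True)
    for theory in by_remainder[:leftover]:
        allocation[theory] += 1
    return allocation

# Placeholder credences, for illustration only.
print(allocate_delegates({"totalism": 0.5, "person-affecting views": 0.3, "other": 0.2}))
# -> {'totalism': 50, 'person-affecting views': 30, 'other': 20}
```

The deliberation and vote-trading described in footnote 12 is left to imagination rather than to any formula.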

Discussion

Lessons learned

I have learned a lot from the mistakes I made in my process. First, much of the work I engaged in was fairly far removed from my primary goal of informing future decisions. To be clear, some aspects of my exploration included fundamental shifts that could affect big decisions, such as including autonomy as a fundamental value or defining my approach to moral uncertainty more explicitly. However, I spent a large amount of time trying to refine smaller pieces that matter less. For example, I studied a few highly specific dilemmas comparing increasing autonomy with preventing a decrease in autonomy, in the context of acts versus omissions. While that exploration was still valuable for improving my moral reasoning skills, it is relatively unlikely that such subtle nuances will ever be decision-relevant.

Second, despite trying to openly explore all perspectives, I frequently found myself slipping into a pattern of reinforcing my existing beliefs. What I found most surprising was that this tendency did not disappear even after changing my mind on a position more than once. This was a truly valuable lesson on how important it is to be vigilant when engaging in steelman solitaire.

Third, some of my questions seemed to suggest answers to prior questions. For example, “What values are fundamentally good?” implies that good actions are only instrumentally good and could have anchored me to that view, even though whether good actions are inherently good rather than merely instrumentally good was itself one of the questions I wanted to explore. If I had defined my questions more neutrally, I could have reduced the bias in my thinking.

Finally, I discovered how difficult it is to assign credence values to whether or not I will change my mind about something. This seems to be primarily because almost all unknowns regarding my future beliefs are unknown unknowns, which makes prediction especially difficult.

Future exploration

Given that I don’t believe moral beliefs should ever be finalized, there is significant room for further exploration of any or all of the questions I considered. Most notably, my leaning towards behavioral determinism was a crux for two significant pieces of my moral theory. Since I have put little effort into scrutinizing my beliefs about free will, this is a promising area of exploration. I would also like to refine the credence values from the Moral uncertainty section once I am better calibrated.

Beyond refining my existing beliefs, there are some open questions that I did not get the chance to explore within the time constraints of my project. For example, “Are some forms of happiness and suffering categorically more important than others?” and “Should I change my actions in response to threats and/or attempts to manipulate me using my moral theory?” I also never explicitly compared specific types of happiness and suffering, since this project was largely theoretical. However, it would be good to develop a system of specific moral comparisons to gain a better understanding of what I value in a practical sense. On a related note, I would be interested in undertaking a similar project to re-evaluate my political beliefs or other practical considerations related to morality.

Final reflection

I believe I have achieved all of the goals of this project. I significantly shifted and refined my moral system, and I feel much more prepared to face challenging moral decisions. In addition, I believe I have greatly increased my ability to combat anchoring bias, which has historically been one of the biases I struggle with most. Finally, through writing this report, I have forced myself to reconsider and clarify my beliefs such that they can be understood by others. I think I have done a reasonably good job, but I guess you can be the judge of that.

This project is a great experience for anyone who wants to improve their reasoning skills and/or refine their moral values, especially if you enjoy philosophy like I do. However, if your main drive is refining your moral values, be careful to ensure that all the questions you consider are actually decision-relevant, and feel free to skip steps as needed.

If you try this project, be careful to actively avoid anchoring bias throughout the entire process. Also, it is best to create a write-up, even if you don’t publish it. In the process of writing this, I discovered two subtle issues with my initial conclusions that I would not have caught otherwise.

Overall, this project has been immensely successful at making me a better moral thinker, and I would absolutely recommend that others give it a try.

  1. ^

    This does not mean that I have finalized my beliefs, as I don’t believe moral beliefs should ever be truly finalized. The beliefs stated are accurate as of this report’s publication.

  2. ^

    This is why I am writing this report, along with wanting an accountability mechanism to ensure I re-evaluate all of my positions closely.

  3. ^

    Excluding any passive thinking as well as the time spent writing this report.

  4. ^

    To outline my arguments, I used Workflowy, a note-taking app that allows you to collapse and rearrange bullet points within an outline. For anyone looking to try this project, I would strongly recommend using Workflowy, since its organization structure is well suited to outlining steelman solitaire.

  5. ^

    This is not an actual final result, as I want to remain open to changing my mind in the future, though it is finalized for the sake of this project.

  6. ^

    Universal prescriptivism is the belief that moral statements are merely imperative statements that apply universally. Under this view, “murder is wrong” has no difference in meaning from “people shouldn’t murder.”

  7. ^

    Behavioral determinism is the view that our behavior is entirely explainable based on our genetics, environment, and past experiences, none of which are in our control.

  8. ^

    I acknowledge that this varies from person to person. Generally speaking, when people’s preferences are not easily discernible or many people are involved, I think averages are a sufficiently good approximation.

  9. ^

    This does not prohibit interference for the sake of one or more third parties. However, the negative welfare effects of interference must be entirely outweighed by the positive welfare effects experienced by those third parties.

  10. ^

    If I have access to a unique and reliable source of information that confirms the harm’s existence but can’t be shared, those I am trying to help should not be considered well-informed. In such a case, interference would be warranted. However, if I don’t have access to any special information, those I am trying to help should be considered well-informed, and I should not interfere.

  11. ^

    Thus, the autonomy of all sentient beings should be respected in accordance with the stated interference restrictions.

  12. ^

    For those unfamiliar, a moral parliament is a hypothetical group of 100 people that represent different moral theories. Each theory is represented with a number of delegates that is proportional to your credence in that theory. When you face a decision with moral uncertainty, you imagine that the parliament meets, discusses the issue, and trades votes before determining what action you should take.

  13. ^

    Newberry and Ord recommend setting up specific voting systems that encourage cooperation and force delegates to make tradeoffs with even the smallest minorities within the system. I instead imagine that my delegates seek to be cooperative, so majorities are willing to make sacrifices on rare issues that small minorities care deeply about. I don’t expect these systems to differ noticeably in practice, but I prefer imagining a system where people simply want to be cooperative without systemic influence.

    However, there may be edge cases in which the two systems differ slightly. In both systems, the minority party has to choose to focus on important issues that occur rarely. If such an issue comes up multiple times in rapid succession, my system would support the minority party each time. The majority delegates are seeking to cooperate and are alright with allowing the minority to win on theoretically rare occasions. However, I am unsure whether Newberry and Ord’s system would do the same. It is possible that under their system, the minority party may be unable to raise sufficient political capital to win each time. I like the way my system utilizes theoretical rarity, though I understand why others would want to use actual rates of occurrence.

  14. ^

    To prioritize the decision-relevance of my moral parliament, I assigned zero credence to any belief that I have not considered. However, for many questions, it is highly likely that an infinitely-informed version of myself would find a completely different answer from anything I have thought about so far.