đŸ‡ș🇳 We can make world leaders agree on moral values THIS YEAR. (according to experts I spoke to)

Introduction

The idea

The idea is to get the UN to facilitate agreement amongst world leaders on which moral philosophy they should base their decisions on; this article expands on that idea.

Preface

I’m mostly asking for feedback[1], assistance [contacting the UN system about this], help actually starting this project (if your job is relevant), and any information (such as whether any world leaders don’t seem willing to change their moral values, relevant psychology, how the UN implements projects, how slow of a process this might be, etc.) that can aid in deciding how (or, I suppose, if) this project should be implemented.

Why this might be good

As I am sure you know, everyone makes decisions based on three things:

  1. The information they have,

  2. Their decision-making process (often assumed to be their values, as in game theory; when it comes to geopolitics, there’s much less human error than in day-to-day life, so it should roughly be their values, and divergence from that is human error),

  3. Their options.

This is because when a person decides something, they use the information they have (1) and their decision-making process (2), and ONLY based on those things do they choose one of their options (3).
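As a minimal sketch of this model (every name and number below is hypothetical, purely for illustration): two agents that share the same values and the same information must choose the same option.

```python
# Minimal sketch of the decision model above. All names and numbers here are
# hypothetical, chosen only to illustrate the claim; this is not a real system.

def decide(options, information, values):
    """Use information (1) and a decision-making process (2) to pick an option (3)."""
    return max(options, key=lambda option: values(option, information))

# Two leaders with the same information (1)...
shared_information = {"crisis_severity": 0.8}

# ...and the same values / decision-making process (2): prefer a response
# proportionate to the crisis (an arbitrary toy rule).
def shared_values(option, information):
    return -abs(option["escalation"] - information["crisis_severity"])

# ...choosing among the same options (3)...
options = [
    {"name": "negotiate", "escalation": 0.2},
    {"name": "sanction", "escalation": 0.6},
    {"name": "mobilize", "escalation": 0.9},
]

# ...necessarily make the same choice, absent human error.
leader_a = decide(options, shared_information, shared_values)
leader_b = decide(options, shared_information, shared_values)
assert leader_a == leader_b
```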

It’s abundantly clear why it would be good for world leaders to agree on what they value[2]: said world leaders would always want the same thing, assuming they have the same or similar enough information and human error doesn’t get in the way.

Why this might work

Most people are aligned not with their goals at the moment, but rather with their goals overall. That’s phrased a bit weirdly, so I’ll expand on it:

Someone might be a member of the Democratic Party at one point in time, but they probably wouldn’t take a pill that made them permanently hold the opinions of a Democratic Party member of that time.

  1. If this isn’t caused by human error, it must be caused by their values.

  2. If their values only cared about their opinions and beliefs at that time, then they would take the pill.

  3. Since they likely wouldn’t, their values must also care, in part, about their future opinions or beliefs.

In addition, most people wouldn’t take a pill that made them highly addicted to ice cubes.

  4. If this isn’t caused by human error, it must be caused by their values.

  5. If their values only cared about satisfying whatever opinions and beliefs they hold at the time, then they would take the pill, since their future belief that eating ice cubes is extremely valuable could easily be satisfied by going to the fridge and grabbing a handful of ice cubes.

  6. Since they wouldn’t take the pill, they must not care only about whatever opinions and beliefs they hold at the time.

IF a person wouldn’t take the pill in either case, and IF that refusal isn’t caused by human error, then statements 3 and 6 MUST both be satisfied.
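Both pill arguments share the same logical shape. Here is a compact restatement (the symbols P, T, and E are introduced here purely for this summary):

```latex
% P : "their values care only about their present opinions and beliefs"
% T : "they would take the pill"
% E : "their refusal is caused by human error"
\[
  (P \rightarrow T) \land \lnot T \land \lnot E \;\implies\; \lnot P
\]
% If caring only about present beliefs implies taking the pill, yet they would
% not take the pill and human error is ruled out, then their values must care
% about more than their present beliefs; this is what statements 3 and 6 say.
```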

In order to satisfy statements 3 and 6, said person’s true values could be many things. Three very reasonable possibilities are:

  1. They value doing good, and their beliefs and opinions change as they think about them more. (e.g., someone might switch from acting like a nihilist to acting like a utilitarian since that better aligns with their values. They allow THIS change because they know that some reasonable process caused it. If they thought they would start going crazy soon, they might take steps to prevent a change in their values, since they wouldn’t trust the decisions of a crazy person, even when the crazy person is them.)

  2. They care about making decisions based on logical reasoning and “reasonable” values. Valuing ice cubes so highly is not “reasonable”.

  3. More generally, they might have one overarching goal (for example, “doing good”), and they change their opinions and beliefs to better align with that goal.

This may be the case for very impactful people, and so any change to their values would be welcome, as long as it is based on logical reasoning and made with THEIR consent, so they know that it meets THEIR goals. (I don’t think any of them are actually crazy, but I don’t know; even if they are, or if they often succumb to human error, they would still likely only change their values with THEIR consent.)

  1. In addition, not only would world leaders end up with moral values that are more logical, but they would end up with moral values that many more important people agree with!

  2. In addition, world leaders might hold off on major decisions, since they know that, on average, they would make a more educated decision after their moral values improve and align with others’. (🚹🚹🚹IMPORTANT NOTE HERE: if a person in power knew that something such as this had a chance of occurring, they might hold off on major decisions[3]. So: if you have the ability to contact someone who might be considering a major decision, PLEASE consider telling me[4] so I can tell them about it (or otherwise get them to know), or tell them enough about this for them to hold off on said decision. But make sure they wouldn’t use that information for bad ends: e.g., if they are morally aligned with their current values or otherwise don’t trust the UN, they might use their knowledge of this project to try to stop it from occurring, or at least to stop it from applying to them.)

Why this might not work / factors that might cause this not to be implemented / factors that might make this a bad idea

It might be too slow

One major issue is that the process might be too slow. Maybe it won’t be! I honestly don’t know. Maybe there’s some study on how long it takes to change the mind of someone who sees their opinion as important, and that might be useful in determining how long this would take.

I will note that this program can help plenty of world leaders decide on a moral value simultaneously (under some methods of the project being implemented), which could make the process much faster.

Certain assumptions might not be met

Another potential way this wouldn’t work or be implemented is if many world leaders don’t match the reasonable assumptions behind the reasoning above for why it might work.

It might be too hard

Another issue is that it might be very hard to convince everybody that it is important, especially if we define “important” more loosely, allowing more people to fit the description, and ESPECIALLY if it needs cooperation from a large group, such as the citizens of a nation, and the moral values go against that group’s culture. Imagine you’re a devout Christian hearing that the UN decided that coveting one’s neighbor’s wife is really not that bad (and they mentioned that explicitly in a summary of the program report). Notably, many of these bigger groups are heavily influenced by smaller groups: most unions have union leaders or leadership bodies, most armies have generals of differing ranks, most political groups have figureheads, most religions have priests or an equivalent, etc.

Adverse incentives

The issue

  1. Due to the dynamics of politics, people in power are disproportionately not morally aligned: someone who values being in office the longest would, on average, be in office longer than someone who wants to do good (see the toy simulation below). Moreover, those who are willing to become more morally aligned would disproportionately be put at a disadvantage, since this program would be more likely to make them more moral, and thus in power for less time compared to those who were less willing to budge: a change from mostly moral to moral might be the straw that breaks the camel’s back, causing them to be in power for much less time.[5]

  2. This decrease might be especially extreme if their keys to power think that their becoming more moral is so bad that they need to be replaced. For example, a country’s president might strongly disagree with the morals that were settled on and replace an ambassador who attended the program. (This provides further reason why the keys to power of those in power should also go through such a program.)

Both of these provide reasons why a potential participant, or a person affected by the program, might actively try to stop it.
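Here is a toy simulation of the selection effect in point 1 (the survival probabilities are made-up assumptions, not data): when leaders who prioritize holding power survive each term more often than leaders who prioritize doing good, the pool of sitting leaders skews power-seeking.

```python
import random

# Toy selection model. The survival probabilities are made-up assumptions,
# chosen only to illustrate the dynamic, not estimated from any data.
SURVIVAL_PER_TERM = {"power-seeking": 0.9, "moral": 0.7}

def average_tenure(kind: str, trials: int = 100_000) -> float:
    """Average number of terms survived before losing power."""
    total = 0
    for _ in range(trials):
        while random.random() < SURVIVAL_PER_TERM[kind]:
            total += 1
    return total / trials

# Expected tenure for per-term survival probability p is p / (1 - p) terms:
# ~9 terms for the power-seeking leader vs ~2.3 for the moral one, so the
# pool of sitting leaders ends up dominated by the power-seeking kind.
print(f"power-seeking: {average_tenure('power-seeking'):.1f} terms")
print(f"moral:         {average_tenure('moral'):.1f} terms")
```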

Some counteracting forces

  1. This can be counteracted by having some of these dynamics of politics push towards being moral: moral world leaders might do better in a world filled with other moral world leaders than immoral world leaders would.

  2. Furthermore, you don’t need to value staying in power in order to stay in power, or to try to: if you’re a world leader with some moral values, you’d want to stay in power when the alternative is less moral than you.

  3. In addition to THIS, one of the main reasons moral world leaders do seemingly immoral things is to fend off less moral world leaders from taking their power. This force would be drastically counteracted by the UN and most world leaders agreeing on a moral philosophy that they act upon.

  4. IN ADDITION, a person in a position of power could act the same way as an immoral version of themselves, except when being moral doesn’t have a noticeable effect on how much power they have. (This is practically the bare minimum, since it would mean that a program like this would only have an effect in those edge cases.)

  5. IN ADDITION, if world peace (or something similar) were achieved under said person’s leadership, that would be a major boost to any campaigns they endorse (for campaigns in democracies), it could make them seem like a better fit for most roles, it would boost their image, and more!

We’ve been trying part of this for a while

One glaring flaw is that it would be extremely difficult to land on the correct moral philosophy. Philosophers have been trying for millennia, and there still isn’t a consensus!

One counterargument to this is that it doesn’t need to be the RIGHT moral philosophy; it just has to be GOOD ENOUGH to be better than the current status quo, which is much less difficult.

Miscellaneous

  1. One general note is that the UN sort of already has this: the UN Charter. However, it isn’t enforced in this way, and major member states don’t abide by it, or otherwise didn’t in the past, such as Russia. They might want to apply the techniques proposed here, but not all of them would work if the final moral goal is set in stone: no one can change it such that they agree with it, so if the Charter doesn’t agree with someone’s values, no amount of convincing will change their mind, unless you change their fundamental values (something pretty hard to do; imagine how much convincing I’d have to do to get you to stop doing good!).

  2. Since important people become more and less important over time, whether due to being elected, hired, resigning, dying, etc., this program would presumably continue throughout time (to get all the new and existing world leaders to agree on a moral philosophy); or perhaps the UN would fully agree on one moral philosophy; or perhaps every few years world leaders and experts would convene to decide whether it should change, and if so, what it should change to. (This is one of the main ways this idea can be improved: “How should this be implemented long-term?”)

If you have any questions, feel free to ask![6]

I likely forgot about some variables that can be changed to make this idea better, as well as variables that could affect whether or not this is a good idea, so please let me know if you spot any, or if you know what those variables are “equal to”. (That is, what should be adjusted, and what are the real-world features that affect whether this is a good or bad idea?)

  1. ^

    So far,

    The following people have given me feedback:

    1. 4 non-experts.

    2. 1 international relations expert

    3. Arturo Marcias

    4. Christopher Canal

    5. At least 4 people who work at the UN[7]

    6. Roughly 6 people quickly talked about it on the call, and none of them said it was crazy, either. They also gave a few notes before we switched topics.

  2. ^

    I will note that there are a bunch of different extents to how successful this could be. Here are some (somewhat rushed) examples:

    1. Scale & Moral success: this project results in all people (outside of the very occasional bad apple) agreeing exactly on what moral system to use such that no one would disagree on any decision unless they had different information or if human error got in the way, AND this moral system is the fundamentally correct one.

      1. The general impact of this would basically be that all of humanity would soon live in a utopia, and the world would basically be optimal, putting aside human error.

      2. This seems to be one of the hardest options, if not THE hardest, but I don’t know the specifics.

    2. Moral success: Most world leaders agree on the fundamentally correct moral system.

      1. The impact of this could be that most people live as normal, content in the knowledge that the world is much safer. However, there are some cases where non-world leaders might be empowered to cooperate against a better world order, perhaps through strikes, protests, or worse (which is potentially much less likely), e.g., governments losing their monopoly on power; or smaller groups of people or individuals might try to interfere negatively through the common methods used throughout history. On the other hand, it might sort itself out, and such a positive scenario might empower greater cooperation between good-doers, which seems like a more reasonable scenario. In addition, in many of these scenarios, such better leaders might encourage people to be more moral and logical.

      2. This one also seems very difficult, since landing on the exact correct moral philosophy is something that philosophers have been working on for at least two millennia by now. Of course, progress and the rate of good ideas and whatnot have dramatically increased in very recent history, so it’s a possibility that shouldn’t be ruled out.

    3. Varied moral success: Most world leaders agree on most things, but there are some edge cases where their slightly differing values give them opposing interests on select issues.

      1. This probably will have less of a unionizing force on less powerful bad actors, since the situation is less compromising to some goals bad actors might have; it might have a comparable unionizing force amongst good-doers. However, world peace might not be achieved here, though I imagine that, in many of these scenarios, it is easy to improve the situation to one of the better ones. (In this case, 5 might be pretty achievable.)

      2. This seems like one of the most likely scenarios, besides the scenario where this can’t or wouldn’t be implemented.

    4. Less moral success: Like scenario 2, but with a “sub-par” moral system.

      1. This probably will have less of a unionizing force on less powerful bad actors, since the situation is less compromising to some goals bad actors might have; it might also have less of a unionizing force amongst good-doers. While it might produce a world-peace equivalent, the world might head toward a slightly or largely more sub-par or misaligned long-term future for humanity.

      2. This also seems like one of the most likely scenarios, besides the scenario where this can’t or wouldn’t be implemented.

    5. Moral stepping-stone success: World leaders agree that there is some correct moral system on which decisions should be made, but they might still disagree on some decisions.

      1. This might cause much less disagreement amongst world leaders, and world leaders would almost always agree that the goal is to find out what the right decision is, and thus cooperate to find it.

      2. This seems like one of the easiest ways to get the ball rolling. In an emergency, such as a repeat of the Cuban Missile Crisis, this seems like the best strategy for an emergency implementation, but any of these could result in world leaders holding off on decisions such as launching nukes, even if the program were only announced.

    6. A version of 3 which works on all people.

      1. This might cause greater cooperation on many sides, which could be good or bad, depending on specifics.

      2. This seems possible, though I doubt the UN would play much of a direct role here. If this were to happen, I would imagine it would mostly arise from a general cultural shift that encourages people to think about their values. There is certainly precedent for major news having major cultural impacts, such as the events of 2001, which are so ubiquitous that the year is often associated with the tragedy.

    7. A version of 4 which works on all people.

      1. This might cause greater cooperation on many sides, which could be good or bad, depending on specifics.

      2. This seems much less possible given the sheer number of people and the possibility of large groups rejecting a sub-par moral system, though I doubt the UN would play much of a direct role here. If this were to happen, I would imagine it would mostly arise from a very, very major cultural shift that encourages people to think about their values. I am unaware of any precedent for such a large cultural shift.

    8. A version of 5 which works on all people.

      1. This might cause greater cooperation on many sides, which could be good or bad, depending on specifics. It also might make arguments resolve much better, and it could easily result in up-and-coming world leaders already agreeing with certain moral values before rising to power. (This one applies to all of the scenarios that work on the general population.)

      2. This actually does seem like a reasonable possibility. Many moral values are already pretty widespread, even if the logic behind them isn’t; one of the most notable is “the golden rule”. Something like this could certainly come as a result of the UN, especially since it agrees with most moral philosophies (namely because, for most moral philosophies, one could think “One moral philosophy is fundamentally the right one, and that one is MY moral philosophy.”), and thus its widespread acceptance doesn’t have to provide reasoning as to why other moral philosophies are wrong.

    Generally, there are many easy-ish (specifics-dependent) ways of improving cooperation amongst good-doers and decreasing cooperation amongst bad actors, so it might be worth weighting those factors less, given the potential ease of controlling them through other means.

    I will re-emphasize that this footnote doesn’t account for every scenario and is really not that comprehensive. It mainly provides a jumping-off point from which to develop specific parts of the project so as to lead to specific results, and to figure out what parts of the program might result in what outcomes, and how good said outcomes are or may be.

  3. ^

    This is because something like this is so major that it might impact their decision, so they might hold off on a decision until they can make a more educated one.

  4. ^

    I’d appreciate it if you told me, so I can keep track of who knows.

    If you think someone else should know instead of me, AND that they should know who knows, then please let me know, so I can send them the list of people I know are aware of this.

    If you want certain restrictions on whom I send any info about this to, please let me know. I’d appreciate reasoning as to why.

  5. ^

    Note that this is often not the people in power’s fault.

  6. ^

    To be clear: if you have ANY questions, please ask. Interpret this article the same way you would a version of it that included the answers to any given question, where you may request access to see what an answer is. If there is a typo, for example, don’t interpret this article as though said typo was intentional; interpret it as though there were a footnote right above the typo that said “this is a typo. I meant to say ___”. In terms of game theory, soft power, deciding on a Nash equilibrium, and whatnot, note that if anything is unclear, everyone else will have asked “hey, can you clarify this?”, both for clarification and because they would want to know what clarification everyone else got.

  7. ^

    They didn’t give comments on how to improve it, but they didn’t say the idea was crazy, and they definitely read it.