Below is a shortened version of the original post submitted for the competition. See the post content for newer material.
--------
TL;DR
Keep the word belief in your vocabulary, distinct from your use of hypothesis or conclusion.
Unweighted beliefs are useful distinctions to employ in your life.
Constrain beliefs rather than decrease your epistemic confidence in them.
Challenge beliefs to engage with underlying (true) knowledge.
Take additional actions to ensure preconditions for the consequences of your actions.
Match ontologies with real-world information to create knowledge used to make predictions.
Use an algebraic scoring of altruism and selfishness of actions to develop some simple thought experiments.
My red team critique is that EA folks can effectively rely on unweighted beliefs where they now prefer to use Bayesian probabilities.
Introduction: Effective Altruists want to be altruistic
EA community members want to improve the welfare of others through allocation of earnings or work toward charitable causes. Effective Altruists just want to do good better.
There are a couple of ways to know that your actions are actually altruistic:
Believe that your actions are altruistic.
Confirm your actions are altruistic before you believe it.
I believe that EA folks confirm the consequences of their altruistic actions in the near term. For example, they might rely on:
expert opinion
careful research
financial audits
cost-effectiveness models
big-picture analyses
and plausibly other processes to ensure that their charities do what they claim.
The essentials of my red team critique
Bayesian subjective probabilities don’t substitute for unweighted beliefs
The community relies on Bayesian subjective probabilities when I would simply rely on unweighted beliefs. Unweighted beliefs let you represent assertions that you consider true.
Why can’t Bayesian probabilities substitute for all statements of belief? Because:
Humans do not consciously access all their beliefs or reliably recall formative evidence for every belief that they can articulate. I consider this self-evident.
Beliefs are important to human evaluation of morality, especially felt beliefs. I will explore this in a couple of thought experiments about EA guilt.
Beliefs can represent acceptance of a conclusion or validation of a hypothesis, but sometimes they appear as seemingly irrational feelings or intuitions.
Any strong assertion that you make quickly is a belief. For example, as a medic, you could say “Look, this EpiPen will help!” to someone guarding their anaphylactic friend from you.
Distinguish beliefs from conclusions from hypotheses
Use the concept of an unweighted belief in discussion of a person’s knowledge. Distinguish beliefs from conclusions from hypotheses, like:
Belief Qualifiers: “I believe that X.”, “X, or so I believe.”, “X.”
Conclusion Qualifiers: “Therefore X.”, “I conclude that X.”, “X is my conclusion.”
Hypothesis Qualifiers: “I hypothesize that X.”, “X, with 99% confidence.”
I don’t mean to limit your options. These are just examples.
Challenge the ontological knowledge implicit in a belief
If I say, “I believe that buying bednets is not effective in preventing malaria,” you can respond:
“Why do you believe that?”
“What led you to that conclusion?”
“Is that your theory?”
“Based on what evidence?”
I believe that you should elicit ontology and knowledge from the person you challenge, rather than letting the discussion devolve into exchanges of probability estimates.
Partial list of relationship types in ontologies
You will want information about entities that participate in relationships like:
part-whole relationships: an entity is part of another entity
meaning relationships: a defining, entailment, or referent relationship
causal relationships: a causal relationship, necessary, sufficient, or both
set-subset relationships: a relationship between sets
I will focus on causal relationships in a few examples below. I gloss over the other types of relationships though they deserve a full treatment.
A quick introduction to ontologies and knowledge (graphs)
An ontology is a list of labels for things that could exist and some relationships between them. A knowledge graph instantiates the labels and relationships of an ontology. The term knowledge graph is confusing, so I will use the word knowledge as a shorthand. For example:
Ontology: Tipping_Points causes Bad_Happenings, where Tipping_Points and Bad_Happenings are the labels, and the relationship is causes.
Knowledge: “Melting Arctic Ice causes Ocean Heating At the North Pole.” where Melting Arctic Ice instantiates Tipping_Points and Ocean Heating At The North Pole instantiates Bad_Happenings.
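To make the distinction concrete, here is a minimal Python sketch. The labels and entities come from the example above; the data layout itself is only illustrative, not a claim about how ontologies must be represented.

```python
# Ontology: labels for things that could exist, plus relationship types between labels.
ontology = {
    "labels": ["Tipping_Points", "Bad_Happenings"],
    "relationships": [("Tipping_Points", "causes", "Bad_Happenings")],
}

# Knowledge: concrete entities instantiating the ontology's labels and relationship.
knowledge = [
    {
        "subject": ("Melting Arctic Ice", "Tipping_Points"),  # (entity, label it instantiates)
        "relationship": "causes",
        "object": ("Ocean Heating At The North Pole", "Bad_Happenings"),
    },
]
```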
I am discussing beliefs about the possible existence of things or possible causes of events in this red team. So do EA folks. Longtermists in Effective Altruism believe in “making people happy and making happy people.” Separate from the possibilities or costs of longtermist futures is the concept of moral circles, meant to contain beings with moral status. As you widen your moral circle, you include more beings of more types in your moral calculations. These beings occupy a place in an ontology of beings said to exist at some point in present or future time[1]. Longtermists believe that future people have moral status. An open question is whether they actually believe that those future people will necessarily exist. In order for me to instantiate knowledge of that type of person, I have to know that they will exist.
A quick introduction to cause-effect pathways
A necessary cause is required for an effect to occur, while a sufficient cause guarantees the effect on its own but is not required for it. There are also necessary and sufficient causes, a special category.
When discussing altruism, I will consider actions taken, consequences achieved, and the altruistic value[2] of the consequences. Let's treat actions as causes, and consequences as effects.
In addition, we can add to our model preconditions, a self-explanatory concept. For example, if your action has additional preconditions for its performance that are necessary in order for the action to cause a specific consequence, then those preconditions are necessary causes contributing to the consequence.
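As a toy illustration, here is a sketch that treats an action and its preconditions as necessary causes of a consequence, under the simplifying assumption that everything is a boolean:

```python
# Sketch: a consequence follows only when the action is taken AND every
# necessary precondition holds. Booleans stand in for real-world checks.

def consequence_occurs(action_taken: bool, preconditions: list[bool]) -> bool:
    return action_taken and all(preconditions)

# Example: paying for cement builds the plant only if the cement also arrives.
print(consequence_occurs(action_taken=True, preconditions=[True, True]))   # True
print(consequence_occurs(action_taken=True, preconditions=[True, False]))  # False
```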
Using ontologies to make predictions
“Excuse me. Zero! Zero! There is a zero percent chance that your subprime losses will stop at five percent.” -The character Mark Baum in The Big Short
I will offer my belief that real-world prediction is not done with probabilities or likelihoods. Prediction outside of games of chance is done by matching an ontology to real-world knowledge of events. The events, whether actual or hypothetical, contain a past and a future, and the prediction is that portion of the cause-effect pathways and knowledge about entities that exist in the future.
Some doubts about subjective probability estimates
I doubt the legitimacy and meaningfulness of most subjective probability estimates. My doubts include:
betting is not a model of neutral interest in outcomes. Supposedly betting money on predictions motivates interest in accuracy, but I think it motivates interest in betting.
you can convey relative importance through probabilities just as easily as you can convey relative likelihoods. I think that’s the consequence or even the intent behind subjective probability estimates in many cases.
subjective probabilities might be used with an expected value calculation. I believe that humans have no reliable sense of judgement for relative subjective probabilities, and that expected value calculations that rely on any level of precision will yield expected values without any real significance.
To reframe the general idea behind matching ontologies to events, I will use Julia Galef’s well-known model of scouts and soldiers[3]:
scouts: collect knowledge and refine their ontology as they explore their world.
soldiers: adopt an (arbitrary) ontology and assert its match to the real-world.
An example of a climate prediction using belief filters and a small ontology
This example is pieces of internal dialog interspersed with pseudo-code. The dialog describes beliefs and questions. The example shows the build-up of knowledge by application of an ontology to research information. I gloss over all relationships except causal ones, relying on the meanings of ontology labels rather than making relationships like part-whole or meaning explicit.
The ontology content is mostly in the names of variables and what they can contain as values while the causal pathways are created by causal links. A variable looks like x? and → means causes. So x? → y? means some entity x causes some entity y.
Internal dialog while researching: Climate change research keeps bringing tipping point timing closer to the present, as the Global Average Surface Temperature (GAST) increases required to tip natural systems drop in value. Climate tipping points are big and cause big bad consequences. So are there climate system tipping points happening now? Yes, the ice of the Arctic melts a lot in the summer.
Knowledge instantiation:
small GAST increase? → Tipping_Points?
1.2C GAST Increase → Ice-free Arctic
Tipping_Points? → Big Bad Consequences?
Ice-free Arctic → Big Bad Consequences?
Internal dialog while researching: Discussions about tipping points could be filtered by media or government. So is there a near-term catastrophic threat from tipping points? No, but the Arctic melt is a positive feedback for other tipping points. The Ice-free Arctic: speeds permafrost melt that in turn leads to uninhabitable land and 1+ trillion tons of carbon potentially emitted; causes rapid release of methane hydrates or thermogenic methane from the East Siberian Arctic Shelf that in turn could cause a 1 degree Celsius GAST rise; accelerates Greenland melt that in turn could cause up to 7 meters of sea level rise globally. I estimate 1 meter of sea level rise destabilizes coastal populations and 2 meters of sea level rise will wipe some countries out.
Knowledge instantiation:
Tipping_Points? → Big Bad Consequences?
Ice-free Arctic → permafrost melt
permafrost melt → Big Bad Consequences?
permafrost melt → uninhabitable land and 1+ trillion tons carbon emitted
Ice-free Arctic → methane hydrate/thermogenic methane emissions
methane hydrate/thermogenic methane emissions → Big Bad Consequences?
methane hydrate/thermogenic methane emissions → 1C GAST rise
Ice-free Arctic → accelerating Greenland melt
accelerating Greenland melt → Big Bad Consequences?
accelerating Greenland melt → up to 7m sea level rise possible
sea level rise → Big Bad Consequences?
sea level rise → 1+ meters destabilizes coastal populations, 2+ meters wipes out some countries
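If you want to manipulate knowledge like this mechanically, one possible encoding is a directed graph of causal links. The sketch below restates the instantiation above in Python; the traversal function is my own illustrative addition, not a claim about how prediction happens in the mind.

```python
# Causal links from the knowledge instantiation, encoded as a directed graph.
causes = {
    "Ice-free Arctic": ["permafrost melt",
                        "methane hydrate/thermogenic methane emissions",
                        "accelerating Greenland melt"],
    "permafrost melt": ["uninhabitable land", "1+ trillion tons carbon emitted"],
    "methane hydrate/thermogenic methane emissions": ["1C GAST rise"],
    "accelerating Greenland melt": ["up to 7m sea level rise"],
    "up to 7m sea level rise": ["destabilized coastal populations"],
}

def downstream(entity: str, graph: dict) -> set:
    """Collect every consequence reachable from an entity via causal links."""
    found, frontier = set(), [entity]
    while frontier:
        node = frontier.pop()
        for effect in graph.get(node, []):
            if effect not in found:
                found.add(effect)
                frontier.append(effect)
    return found

print(downstream("Ice-free Arctic", causes))  # everything downstream of the Arctic melt
```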
The questions I ask here are fairly simple. Examples include:
What are near-term tipping points?
At what level of GAST rise do the tipping elements fall?
What are the consequences of a tipping element’s fall?
The skills that create the internal dialog include:
research skills: your research skills help you access credible and useful sources of information. For example, your skills can help you find and rely on various streams of information, notice when information out of the mainstream has entered your attention, or learn what makes a perspective surprising or potentially useful. Scout mindset is helpful to research skills.
critical thinking: your critical thinking skills help you determine the plausibility of what you find. For example, your critical thinking skills help you answer useful questions during your research and assess information for plausibility. If you judge the information credible but implausible, then you know to develop your background knowledge and return to make a further assessment.
background knowledge: your background knowledge helps you understand the technical basis of the content you access during research. For example, you can review a claim from a credible source for plausibility, not to reject the claim, but to find out whether you need more background knowledge to assess whether the claim is implausible.
inferences: your inferences help you develop a model of what you foresee. For example, you might foresee consequences that your source materials do not offer. If you use a cognitive aid (for example, extensive notes on an ontology, or an expert system), then you can capture inferences and perform them algorithmically. Otherwise, you rely on your own imperfect memory and judgement and will make mistakes.
The expression of those skills is left implicit (or pretended) in my example. The example is not meant to train you or impress you. With practice, scout mindset, and training (e.g., in academic research), you can improve the skills that give you foresight.
Keeping a small ontology or filtering for relevance
A prediction, by my definition, is a best-fit match of an ontology to a set of (hypothetical) events. Your beliefs let you filter out what you consider relevant in events. Correspondingly, you match relatively small ontologies to real-world events. The more you filter for relevance, the smaller the matching requirements for your ontology.[4] You need to filter incoming information about events for credibility or plausibility as well, but that is a separate issue involving research skills and critical thinking skills.
In my example above:
the internal dialog sections presumed my beliefs. For example, “Discussions about tipping points could be filtered by media or government.” presupposes that organizations might filter discussions.
questions presupposed beliefs. For example, “Is the threat from tipping points immediate?” presupposes that tipping points pose a threat.
These beliefs, the content of the dialog sections, define the ontology that I instantiate with the information that I gather during research.
Superforecasters curate relevant ontologies using research and critical thinking skills
“A hundred percent he’s there. Ok, fine, ninety-five percent because I know certainty freaks you guys out, but it’s a hundred.” -Intelligence Analyst Maya in the movie Zero Dark Thirty
Superforecasters might be good at:
revising their ontologies quickly: by adding new entities or relationships and subtracting others.
matching information to a well-curated ontology: by looking for when ontology relationships match the real world.
I suspect that forecasters continue to use subjective probabilities because forecasters are not asked to justify their predictions by explaining their ontology. I understand another term for prediction is foresight. I expect superforecasters in their domains of expertise have good foresight.
I suspect that some domain experts asked to make predictions will develop ontologies larger than the relevant information requires. Then, when they try to match the ontology against real-world information, they fail to collect only relevant data. These experts don't believe much about what qualifies as relevant. The information is there and available, but they fail to make only the right connections.
I am by no means an expert in prediction or its skills. This is my idea of “common-sense” prediction analysis.
Acknowledge that your beliefs decide your morality
How you live your life can be consequential for other people, for the environment, for other species, and for future generations. The big question is “What consequences do you cause?”
I suggest that you discuss assertions about your consequences without the deniability that conclusions or hypotheses allow. Assert what you believe that you cause, at least to yourself. Don’t use the term consequence lightly. An outcome occurred in some correspondence to your action. A consequence occurred because your action caused it[5].
Ordinary pressures to hide beliefs
I believe that there are a number of pressures on discussions of beliefs about moral actions. Those pressures include:
social expectations and norms
your values and feelings
your attention and actual commitment
To be a good scout involved in intersubjective validation of your beliefs, don’t lie to others or yourself. If you start off by asserting what you believe, then you might be able to align your beliefs with the best evidence available. What you do by expressing your beliefs is:
ignore expectations of rationality or goodness-of-fit to your beliefs.
express your feelings and plausibly express your values as well.
commit to the discussion of your beliefs
provided that describing your unweighted beliefs is tolerable to those involved.
One way to make discussing your actual beliefs tolerable to yourself is to provide a means to constrain them. You can limit their applicability or revise some of their implications. Here are two short examples:
I start out believing that God created the universe 6000 years ago. I learn enough about evolution and planetary history and theories of the start of the universe that I revise my belief. Yes, God created everything, and God created us, but did it 4+ billion years ago in some kind of primordial chemical soup.
I start out believing that climate change is a liberal hoax. I learn about climate science and recognize that climate researchers are mostly sincere. Now I believe that only the most liberal climate researchers make up how bad climate change will be, and I can tell those liberals apart from the truthful majority at the IPCC.
You might think these are examples of poor epistemics or partially updated beliefs, not something to encourage, but I disagree.
Simply making belief statements probabilistic implies that we are often in doubt, but in doubt about unconstrained forms of our original beliefs. Constraining a belief, or chipping away at the details of the belief, is a better approach than chipping away at our confidence in the belief. We curate our ontologies and knowledge more carefully if we constrain beliefs rather than manipulate our confidence in beliefs.
Let's consider a less controversial example. Suppose some charity takes action to build a sewage plant in a small town somewhere. Cement availability becomes intermittent and prices rise, so the charity's effectiveness drops. Subjective confidence levels assigned for the altruistic value of the charity's actions correspondingly drop. However, constraining those actions to conditions in which cement is cheap and supply is assured can renew interest in the actions. Better to constrain belief in the action's effectiveness than to reduce your confidence level without examining why. We'll return to this shortly.
A summary of advantages of unweighted beliefs
At the risk of being redundant, let me summarize the advantages of unweighted beliefs. Preferring unweighted beliefs to weighted beliefs with epistemic confidence levels does offer advantages, including:
an unambiguous statement of what you hold true. An unweighted belief is not hedged. “I believe in god” gives me information that “I ninety-five percent believe in god.” does not[6].
an intuitive use of the concept of beliefs. Beliefs, as I think most people learned about them, are internal, sometimes rational, sometimes taken on faith, knowledge of what is true.
a distinction of believed truths from contingent conclusions or validated hypotheses, as noted earlier.
an opportunity to add constraints to your belief when evidence warrants.
a way to identify that you think true something that contradicts your epistemic best practices[7].
a way to distinguish hunches, intuitions, or other internal representations of beliefs from conclusions or hypotheses.
a way to name an assertion that you can revise or revoke rather than assign to a probability.
an opportunity to focus on the ontology or knowledge implicit in your belief.
The nature of the problem
Tuld: “So what you’re saying is that this has already happened.”
Peter:“Sort of.”
Tuld: “Sort of. And, Mr. Sullivan, what does your model say that that means for us here?”
- CEO John Tuld and Analyst Peter Sullivan in the movie Margin Call
A community of thinkers, intellectuals, and model citizens pursuing altruism is one that maintains the connection between:
thinking about beliefs about the consequences of one’s actions.
intersubjective validation of the consequences of one’s actions.
taking responsibility for changing the consequences of one’s actions for others.
The altruistic mission of the EA community is excellent and up to the task. I will propose a general principle for you, and that’s next.
Altruism valuations lack truth outside their limiting context
If you decide that your actions produce a consequence, then that is true in a context, and limited to that context. By implication, then, you have to assure that the context is present in order to satisfy your own belief that the action is producing the consequence that you believe it does.
An example of buying cement for a sewage-processing facility
For example, if you’re buying cement from a manufacturer to build a sewage-processing facility, and the payments are made in advance, and the supplier claims they shipped, but the building site didn’t receive the cement, then the charitable contributions toward the purchase of cement for the sewage treatment plant do not have the consequence you believe. Your belief in what you cause through your charity is made false. You don’t want to lose your effectiveness, so what do you do? You:
add the precondition that the cement reaches the building site to your list of what causes the sewage plant to be built. You bother because that precondition is no longer assured.
add actions to your causal model that are sufficient to ensure that cement deliveries complete. For example, possible actions might include:
to send a representative to the supplier’s warehouses.
to send employees to ride the freight train and verify delivery.
to create a new contract with the cement manufacturer that payment is only made after delivery.
Let's say that cement deliveries are only reliable if you take all those actions. In that case your beliefs about paying for cement change:
original belief: If we pay for cement, we can build our sewage plant.
revised belief: If we arrange a cement purchase, and send a representative to the supplier, and that representative rides the freight train carrying the cement to the construction location before we send payment for the cement, then we can build our sewage plant.
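In code form, the revision amounts to making the implicit preconditions explicit, so the claim is constrained rather than down-weighted. A minimal sketch, with illustrative names:

```python
# The revised belief is the original belief plus its previously implicit preconditions.
original_belief = {
    "action": "pay for cement",
    "preconditions": [],  # delivery was silently assumed
    "consequence": "sewage plant built",
}

revised_belief = {
    "action": "arrange cement purchase",
    "preconditions": [
        "representative sent to supplier",
        "representative rides freight train to construction site",
        "payment sent only after delivery",
    ],
    "consequence": "sewage plant built",
}

def claim_holds(belief: dict, facts: set) -> bool:
    # Assert the consequence only when every precondition is satisfied.
    return all(p in facts for p in belief["preconditions"])
```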
You could do something more EAish involving subjective probabilities and cost effectiveness analyses, such as:
track the decline in subjective probabilities that your build site receives cement and take action below a specific probability, action just sufficient to raise subjective estimates of delivery efficiency above a certain percentage.
develop a cost-effectiveness model around cement deliveries. Ask several employees to supply subjective probabilities for future delivery frequencies based on proposed alternatives to correct delivery failures. Weight their responses (with their previously determined reliability coefficients) to choose a cost-effective means to solve delivery problems.
rather than send a representative to the supplier or have that person ride the freight train, just get a new contract with a supplier that lets you specify that payment is made after cement is freighted to the build site. If successful, it is the cheapest action to start.
If that process also results in a revised true belief, then good enough. There might be other advantages to your seemingly more expensive options that actually reduce costs later[8].
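For concreteness, here is a sketch of the second option above: weighting each employee's subjective probability estimate by a previously determined reliability coefficient, then comparing options on expected cost per successful delivery. Every number here is made up for illustration.

```python
# Reliability coefficients previously determined for each employee (invented).
reliability = {"A": 0.9, "B": 0.6, "C": 0.75}

# Each option maps employees to their subjective P(delivery succeeds) (invented).
estimates = {
    "send representative": {"A": 0.85, "B": 0.70, "C": 0.80},
    "ride freight train":  {"A": 0.95, "B": 0.90, "C": 0.90},
    "pay after delivery":  {"A": 0.90, "B": 0.80, "C": 0.85},
}
costs = {"send representative": 5000, "ride freight train": 8000, "pay after delivery": 1000}

def weighted_probability(option_estimates: dict, weights: dict) -> float:
    total = sum(weights.values())
    return sum(option_estimates[e] * weights[e] for e in option_estimates) / total

for option, est in estimates.items():
    p = weighted_probability(est, reliability)
    print(f"{option}: P(delivery) = {p:.2f}, cost per success = {costs[option] / p:.0f}")
```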
You can take your causal modeling of your actions further[9]. You can:
look further upstream at causes of the problem. For example, explore what caused the need for a charity to pay for a sewage treatment plant.
assess preconditions of the preconditions of your possible actions. Maybe your employees won't ride the freight train without hazard pay.
look at your other actions to see if they cause the undesirable preconditions that make your charitable effort necessary. Are you contributing to the reasons that the town cannot afford a sewage treatment plant?[10]
Your altruism is limited to the contexts where you have positive consequences
Yes, your actions only work to bring about consequences in specific contexts in which certain preconditions hold. You cannot say much about your altruism outside those contexts unless you consider your actions outside those contexts. Effective altruists leverage money to achieve altruism, but that’s not possible in all areas of life or meaningful in all opportunities for altruistic action. This has implications that we can consider through some thought experiments.
Scoring altruistic and selfish consequences of actions
“Is that figure right?” -Manager Sam Rogers in the movie Margin Call
Actions can be both good and evil in their consequences for yourself or others. To explore this, let's consider a few algebra tools that let you score, rank, scale, and compare the altruistic value of your actions in a few different ways. I think the following four-factor model will do.
Here’s a simple system of scoring actions with respect to their morality, made up of a:
benefit score
harm score
self-benefit score
self-harm score
Let's use a tuple of (benefit score, harm score, self-benefit score, self-harm score) to quantify an action in terms of its consequences. For my purposes, I will consider the action of saving a life to have the maximum benefit and self-benefit and give it a score of 10. Analogously, the maximum harm and self-harm score is also 10. All other scores relative to that maximum of 10 are made up.
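Here is a minimal Python sketch of the four-factor tuple, together with the two vector operations (Euclidean distance and per-factor subtraction) used in the thought experiment below:

```python
from math import sqrt

# (benefit, harm, self_benefit, self_harm), each relative to the made-up maximum
# of 10 per person affected; benefit can exceed 10 when many people are affected.
Score = tuple[float, float, float, float]

def distance(a: Score, b: Score) -> float:
    """Euclidean distance between two actions' scores."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def difference(a: Score, b: Score) -> Score:
    """Per-factor subtraction: how a differs from b on each factor."""
    return tuple(x - y for x, y in zip(a, b))
```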
A Thought Experiment: Two rescues and a donation
Three altruistic scenarios
The Fire Rescue: Consider a stranger unknown to anyone in a neighborhood. She walks into a burning building and saves two children inside (creating 10 points of benefit each) but dies herself from smoke inhalation (inflicting 10 points of harm on herself). She intended well but caused the two children she rescued to suffer smoke inhalation (inflicting a point of harm on each) because of her poor rescue technique, carrying them high on her shoulders and stumbling around for a while coughing.
The Drowning Rescue: Consider the same person walking a different path. She spots a small boy drowning in a flooded gutter, and while rescuing the boy (creating 10 points of benefit), ruins her favorite boots (inflicting 1 point of self-harm). She takes the boy home, and gets a positive reputation in town for her action (creating 2 points of self-benefit).
The Bednet Donation: Consider the same woman after upping her contribution to the Against Malaria Foundation. She funds distribution of 200 bednets (creating 5 points of benefit each for 200 people), does no one else any harm, doesn't benefit herself at all, but causes self-harm with a small loss of discretionary spending (causing herself 0.2 points of self-harm).
In tuple form:
Fire Rescue: (20,2,0,10)
Drowning Rescue: (10,0,2,1)
Bednet Donation: (1000,0,0,0.2)
Distance between Fire Rescue and Drowning Rescue: √(10² + 2² + 2² + 9²) = √189 ≈ 13.7
Distance between Bednet Donation and Fire Rescue: √(980² + 2² + 0² + 9.8²) = √960500.04 ≈ 980.1
Subtraction of Drowning Rescue from Fire Rescue: (20,2,0,10) - (10,0,2,1) = (10,2,-2,9)
Subtraction of Fire Rescue from Bednet Donation: (1000,0,0,0.2) - (20,2,0,10) = (980,-2,0,-9.8)
The distances between actions measure how close together the actions are in terms of their scores. For example, the fire rescue is 13.7 points away from the drowning rescue but 980 points away from the bednet donation.
The subtractions of actions show the differences between actions per factor. For example, the bednet donation is 980 points of benefit higher than the Fire Rescue.
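Plugging the three scenarios into the distance and difference functions sketched earlier reproduces these numbers:

```python
fire_rescue     = (20, 2, 0, 10)
drowning_rescue = (10, 0, 2, 1)
bednet_donation = (1000, 0, 0, 0.2)

print(distance(fire_rescue, drowning_rescue))    # 13.74... ≈ 13.7
print(distance(bednet_donation, fire_rescue))    # 980.05... ≈ 980.1
print(difference(fire_rescue, drowning_rescue))  # (10, 2, -2, 9)
print(difference(bednet_donation, fire_rescue))  # (980, -2, 0, -9.8)
```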
Some legitimate concerns about this thought experiment
Disagreements over these thought experiments could include:
the individual scores are subjective and relative. Is preventing a death (a benefit score of 10) really worth two cases of malaria (a benefit score of 5 * 2)?
the individual scores conflate incompatible types of altruism. Is death comparable to ruining a pair of boots?
the causal models are debatable. Was the large donation actually sufficient to generate new bednet deliveries to Uganda? In the fire rescue thought experiment, are we sure the children suffered from smoke inhalation because of how the rescuer carried them out?
some scores are not possible in practice. For example, is it possible to have null benefit or harm ever?
If you have these concerns, then you can bring them up to improve your use of such a thought experiment. For example, you could disallow absolute 0 values of any individual score.
The influence of beliefs about causation on your sense of responsibility
EA folks complain that they suffer depression or frustration when they compare their plans for personal spending against the good they could do by donating their spending money. Let’s explore this issue briefly.
Consider the woman who donated to Against Malaria for 200 bednets, with 1000 points of altruistic benefit and 0.2 points of self-harm associated. Suppose she needed a vacation, and decided to withhold the donation money and spend it on a week-long spa vacation, gaining 3 points of self-benefit. If she felt guilty for doing so, is that because she believed that she caused 200 malaria infections or failed to prevent them from happening by her choice of action?
Donate to Against Malaria: (1000, 0, 0, 0.2 )
Spend donation money on spa trip to rejuvenate: (0,0,3,0)
Spend donation money on spa trip and undesirably cause 200 malaria infections: (0,1000,3,0)
The question comes down to what she believes about what she causes. Does the spa trip deserve a scoring of (0,0,3,0), or of (0,1000,3,0)? Did she cause 200 people to contract malaria or not, when she took her spa trip?
All this example illustrates is our limitations in assigning ourselves a causal role in events. In the earlier example, if you caused 200 people to contract malaria, you have reason to feel guilty. On the other hand, spending some money on a vacation doesn't give anyone malaria, and certainly isn't cause for guilt, all other things equal.
I have emphasized beliefs throughout this red team exercise because:
beliefs assert what you consider to be consequences of your actions.
beliefs decide what goes onto your own ledger of your good, evil, or selfish actions.
beliefs guide your decisions and feelings in many situations of moral significance.
The flow of time carrying your actions defines a single path of what exists
Your actions precede their consequences. What you believe that you cause is found in the consequences of your actions. You can look forward in time and predict your future consequences. You can look at diverging tracks of hypothetical futures preceded by actions you could have taken but did not take. However, those hypothetical futures are nothing more than beliefs.
On those hypothetical tracks, you can perceive the consequences of what you could have done, like the woman in the fire rescue who elected to call 911 and wait for the fire truck to arrive. Later on, she’s haunted by what would have happened if she had rushed into the building and carried out two living healthy children instead of letting them die in a fire. What would have happened is not an alternate reality though. It is just a belief. The poor woman, haunted by guilt, will never know if her belief is true.
Anyone suffering regret or enjoying relief about what they didn’t do or did do is responding to their beliefs, not some alternate set of facts or escaped reality. Treat ideas of alternative past, present or future events as beliefs.
Conclusion: This was a red-team criticism, not just an exploration of ideas
If I did my job here, then you have understood:
some criticisms of use of betting odds and subjective probabilities in EA.
some suggestions for why and how to use unweighted beliefs.
a model of preconditions, actions, and consequences.
a model of only one path through time and experience in contrast with many paths in belief.
Renewed unqualified use of assertions that indicate unweighted belief will make it easier for you to apply useful critical thinking and prediction skills.
I offered a few recommendations, including that you:
remember that beliefs are more or different than conclusions and hypotheses.
qualify your conclusions and hypotheses as such.
state your unweighted beliefs with confidence.
challenge beliefs to learn the particulars of their implicit ontology or knowledge.
match causal pathways and ontological lists to make predictions.
qualify a prediction with a discussion of the causal pathway it matches or the ontology it presumes, instead of with a prediction probability.
model preconditions, actions, and consequences to achieve your goals.
explore the context of any actions of yours that you want to perform.
I attribute your assessment of your action’s consequences to your beliefs, not your rationality, regardless of how you formed that assessment and any confidence that you claim in it.
I believe that you can succeed to the benefit of others. Good luck!
Footnotes
In fact, any time that you can affect causally. At least, so far, I have not heard any direct arguments for the immorality of not causing beings to exist.
Altruistic value is the value to others of some effect, outcome, or consequence. I call the value altruistic because I consider it with respect to what is good for others that experience an effect, outcome, or consequence. In different contexts, the specifics of altruistic value will be very different. For example, EA folks talk about measuring altruistic value in QALYs when evaluating some kinds of interventions. Another example is that I think of my altruistic value as present in how I cause others' work to be more efficient in time or energy required. There is the possibility of anti-altruistic value, or harm to others, or evil consequences. Some actions have no altruistic value. They neither contribute to nor subtract from the well-being of others. Those actions have null altruistic value.
I don’t believe that the mind uses an inference system comparable to production rules or forward-chaining or other AI algorithms. However, I think that matching is part of what we do when we gather information or make predictions or understand something. I don’t want to defend that position in this critique.
Of course, no matter what confidence value (2%, 95%, very, kinda, almost certainly) that you give an assertion, if you don't believe it, then it's not your belief. Conversely, if you do believe an assertion, but assert a low confidence in it, then it is still your belief.
Beliefs about cause-effects are not scientific assertions about the truth of those cause-effects. Instead, they have all the legitimacy of anything else you believe, including when you assign a subjective probability to that belief.
The narrative details determine what’s cost effective, but a plausible alternative is that the area is subject to theft from freight trains, and an employee riding the freight train could identify the problem early, or plausibly prevent the thefts, for example with bribes given to the thieves.
If you’re interested, there are various methods of causal analysis without probabilities. You can find them discussed in a manual from the company behind Flying Logic software, or in books about the Goldratt methods of problem-solving, and perhaps in other places.
It would be difficult to learn that your actions in your career increase poverty in or force migration from a country receiving financial support for an EA charity, particularly when you also contribute a good chunk of your income to that charity. If you work for some banks or finance institutions, then it is plausible that you are causing some of the problem that you intend to correct.
There could be intuitive matching algorithms put to use that let a person quantify the match they determine between the various causal pathways in their ontology and real-world events. I am just speculating, but those algorithms could serve the same purpose as subjective probabilities in forecasting. The study of case-based reasoning could offer some insights.
Below is a shortened version of the original post submitted for the competition. See the post content for newer material.
--------
TL;DR
Keep and use the word belief in your vocabulary distinct from your use of hypothesis or conclusion.
Unweighted beliefs are useful distinctions to employ in your life.
Constrain beliefs rather than decrease your epistemic confidence in them.
Challenge beliefs to engage with underlying (true) knowledge.
Take additional actions to ensure preconditions for the consequences of your actions.
Match ontologies with real-world information to create knowledge used to make predictions.
Use an algebraic scoring of altruism and selfishness of actions to develop some simple thought experiments.
My red team critique is that EA folks can effectively rely on unweighted beliefs where they now prefer to use Bayesian probabilities.
Introduction: Effective Altruists want to be altruistic
EA community members want to improve the welfare of others through allocation of earnings or work toward charitable causes. Effective Altruists just want to do good better.
There’s a couple ways to know that your actions are actually altruistic:
Believe that your actions are altruistic.
Confirm your actions are altruistic before you believe it.
I believe that EA folks confirm the consequences of their altruistic actions in the near term. For example, they might rely on:
expert opinion
careful research
financial audits
cost-effectiveness models
big-picture analyses
and plausibly other processes to ensure that their charities do what they claim.
The essentials of my red team critique
Bayesian subjective probabilities don’t substitute for unweighted beliefs
The community relies on Bayesian subjective probabilities when I would simply rely on unweighted beliefs. Unweighted beliefs let you represent assertions that you consider true.
Why can’t Bayesian probabilities substitute for all statements of belief? Because:
Humans do not consciously access all their beliefs or reliably recall formative evidence for every belief that they can articulate. I consider this self-evident.
Beliefs are important to human evaluation of morality, especially felt beliefs. I will explore this in a couple of thought experiments about EA guilt.
Beliefs can represent acceptance of a conclusion or validation of a hypothesis, but sometimes they appear as seemingly irrational feelings or intuitions.
Any strong assertion that you make quickly is a belief. For example, as a medic, you could say “Look, this epipen will help!” to someone guarding their anaphylactic friend from you.
Distinguish beliefs from conclusions from hypotheses
Use the concept of an unweighted belief in discussion of a person’s knowledge. Distinguish beliefs from conclusions from hypotheses, like:
Belief Qualifiers: “I believe that X.”, “X, or so I believe.”, “X.”
Conclusion Qualifiers: “Therefore X.”, “I conclude that X.”, “X is my conclusion.”
Hypothesis Qualifiers: “I hypothesize that X.”, “X, with 99% confidence.”
I don’t mean to limit your options. These are just examples.
Challenge the ontological knowledge implicit in a belief
If I say, “I believe that buying bednets is not effective in preventing malaria.” you can respond:
“Why do you believe that?”
“What led you to that conclusion?”
“Is that your theory?”
“Based on what evidence?”
I believe that you should elicit ontology and knowledge from whom you challenge, rather than letting the discussion devolve into exchanges of probability estimates.
Partial list of relationship types in ontologies
You will want information about entities that participate in relationships like:
part-whole relationships: an entity is part of another entity
meaning relationships: a defining, entailment, or referent relationship
causal relationships: a causal relationship, necessary, sufficient, or both
set-subset relationships: a relationship between sets
I will focus on causal relationships in a few examples below. I gloss over the other types of relationships though they deserve a full treatment.
A quick introduction to ontologies and knowledge (graphs)
An ontology is a list of labels for things that could exist and some relationships between them. A knowledge graph instantiates the labels of the relationship. The term knowledge graph is confusing, so I will use the word knowledge as a shorthand. For example:
Ontology: Tipping_Points cause Bad_Happenings, where Tipping_Points and Bad_Happenings are the labels, and the relationship is causes.
Knowledge: “Melting Arctic Ice causes Ocean Heating At the North Pole.” where Melting Arctic Ice instantiates Tipping_Points and Ocean Heating At The North Pole instantiates Bad_Happenings.
I am discussing beliefs about the possible existence of things or possible causes of events in this red team. So do EA folks. Longtermists in Effective Altruism believe in making people happy and making happy people.” Separate from possibilities or costs of longtermist futures is the concept of moral circles, meant to contain beings with moral status. As you widen your moral circle, you include more beings of more types in your moral calculations. These beings occupy a place in an ontology of beings said to exist at some point in present or future time[1]. Longtermists believe that future people have moral status. An open question is whether they actually believe that those future people will necessarily exist. In order for me to instantiate knowledge of that type of person, I have to know that they will exist.
A quick introduction to cause-effect pathways
A necessary cause is required for an effect to occur, while a sufficient cause is sufficient, but not necessary, for an effect to occur. There are also necessary and sufficient causes, a special category.
When discussing altruism, I will consider actions taken, consequences achieved, and the altruistic value[2] of the consequences. Lets treat actions as causes, and consequences as effects.
In addition, we can add to our model preconditions, a self-explanatory concept. For example, if your action has additional preconditions for its performance that are necessary in order for the action to cause a specific consequence, then those preconditions are necessary causes contributing to the consequence.
Using ontologies to make predictions
I will offer my belief that real-world prediction is not done with probabilities or likelihoods. Prediction outside of games of chance is done by matching an ontology to real-world knowledge of events. The events, whether actual or hypothetical, contain a past and a future, and the prediction is that portion of the cause-effect pathways and knowledge about entities that exist in the future.
Some doubts about subjective probability estimates
I doubt the legitimacy and meaningfulness of most subjective probability estimates. My doubts include:
betting is not a model of neutral interest in outcomes. Supposedly betting money on predictions motivates interest in accuracy, but I think it motivates interest in betting.
you can convey relative importance through probabilities just as easily as you can convey relative likelihoods. I think that’s the consequence or even the intent behind subjective probability estimates in many cases.
subjective probabilities might be used with an expected value calculation. I believe that humans have no reliable sense of judgement for relative subjective probabilities, and that expected value calculations that rely on any level of precision will yield expected values without any real significance.
To reframe the general idea behind matching ontologies to events, I will use Julia Galef’s well-known model of scouts and soldiers[3]:
scouts: collect knowledge and refine their ontology as they explore their world.
soldiers: adopt an (arbitrary) ontology and assert its match to the real-world.
An example of a climate prediction using belief filters and a small ontology
This example is pieces of internal dialog interspersed with pseudo-code. The dialog describes beliefs and questions. The example shows the build-up of knowledge by application of an ontology to research information. I gloss over all relationships except causal ones, relying on the meanings of ontology labels rather than making relationships like part-whole or meaning explicit.
The ontology content is mostly in the names of variables and what they can contain as values while the causal pathways are created by causal links. A variable looks like x? and → means causes. So x? → y? means some entity x causes some entity y.
Internal dialog while researching: Climate change research keeps bringing tipping point timing closer to the present as the required Global Average Surface Temperature (GAST) increases for tipping of natural systems drop in value. Climate tipping points are big and cause big bad consequences. So are there climate system tipping points happening now? Yes, the ice of the arctic melts a lot in the summer.
Knowledge instantiation:
small GAST increase? → Tipping_Points?
Tipping points? → Big Bad Consequences?
Ice-free Arctic → Big Bad Consequences?
Internal dialog while researching: Discussions about tipping points could be filtered by media or government. So is there near term-catastrophic threat from tipping points? No, but the arctic melt is a positive feedback for other tipping points. The Ice-free Arctic: speeds permafrost melt that in turns leads to uninhabitable land and 1+ trillion tons of carbon potentially emitted; causes rapid release of methane hydrates or thermogenic methane from the East Siberian Arctic Shelf that in turn could cause a 1 degree Celsius GAST rise; accelerates Greenland melt that in turn could cause up to 7 meters of sea level rise globally. I estimate 1 meter of sea level rise destabilizes coastal populations and 2 meters of sea level rise will wipe some countries out.
Knowledge instantiation:
Tipping_points? → Big Bad Consequences?
Ice-free Arctic → permafrost melt
Permafrost melt → Big Bad Consequences?
Permafrost melt → uninhabitable land and 1+ trillion tons carbon emitted
Ice-free Arctic → methane hydrate/thermogenic methane emissions
methane hydrate/thermogenic methane emissions → Big Bad Consequences?
methane hydrate/thermogenic methane emissions → 1C GAST rise
Ice-free Arctic → accelerating Greenland melt
accelerating Greenland melt → Big Bad Consequences?
accelerating Greenland melt → <7m sea level rise possible
sea level rise → Big Bad Consequences?
Sea level rise → estimate:1+ meters sea level rise destabilizes coastal populations, 2+ meters wipes out some countries
The questions I ask here are fairly simple. Examples include:
What are near-term tipping points?
At what level of GAST rise do the tipping elements fall?
What are the consequences of a tipping element’s fall?
The skills that create the internal dialog include:
research skills: your research skills help you access credible and useful sources of information. For example, your skills can help you find and rely on various streams of information or notice when information out of the mainstream has entered your attention or learn about what makes a perspective surprising or potentially useful. Scout mindset is helpful to research skills.
critical thinking: your critical thinking skills help you determine the plausibility of what you find. For example, your critical thinking skills help you answer useful questions during your research and assess information for plausability. If you assume the information is credible but implausible, then you know to develop your background knowledge and return to make a further assessment.
background knowledge: your background knowledge helps you understand the technical basis of the content you access during research. For example, you can review a claim from a credible source for plausibility, not to reject the claim, but to find out whether you need more background knowledge to assess whether the claim is implausible.
inferences: your inferences help you develop a model of what you foresee. For example, you might forsee consequences that your source materials do not offer. If you use a cognitive aid (for example, extensive notes on an ontology, or an expert system), then you can capture inferences and perform them algorithmically. Otherwise, you rely on your own imperfect memory and judgement and will make mistakes.
The expression of those skills is left implicit (or pretended) in my example. The example is not meant to train you or impress you. With practice, scout mindset, and training (e.g., in academic research), you can improve the skills that give you foresight.
Keeping a small ontology or filtering for relevance
A prediction, by my definition, is a best-fit match of an ontology to a set of (hypothetical) events. Your beliefs let you filter out what you consider relevant in events. Correspondingly, you match relatively small ontologies to real-world events. The more you filter for relevance, the smaller the matching requirements for your ontology.[4] You need to filter incoming information about events for credibility or plausibility as well, but that is a separate issue involving research skills and critical thinking skills.
In my example above:
the internal dialog sections presumed my beliefs. For example, “Discussions about tipping points could be filtered by media or government. ” presupposes that organizations might filter discussions.
questions presupposed beliefs. For example, ” Is the threat from tipping points immediate?” presupposes that tipping points pose a threat.
These beliefs, the content of the dialog sections, defines the ontology that I instantiate with the information that I gather during research.
Superforecasters curate relevant ontologies using research and critical thinking skills
Superforecasters might be good at:
revising their ontologies quickly: by adding new entities or relationships and subtracting others.
matching information to a well-curated ontology: by looking for when ontology relationships match the real world.
I suspect that forecasters continue to use subjective probabilities because forecasters are not asked to justify their predictions by explaining their ontology. I understand another term for prediction is foresight. I expect superforecasters in their domains of expertise have good foresight.
I suspect that some domain experts asked to make predictions will develop larger ontologies than contain relevant information. Then, when they try to match the ontology against real-world information, they fail to collect only relevant data. These experts don’t believe much about what qualifies as relevant. The information is there and available, but they fail to make only the right connections.
I am by no means an expert in prediction or its skills. This is my idea of “common-sense” prediction analysis.
Acknowledge that your beliefs decide your morality
How you live your life can be consequential for other people, for the environment, for other species, and for future generations. The big question is “What consequences do you cause?”
I suggest that you discuss assertions about your consequences without the deniability that conclusions or hypotheses allow. Assert what you believe that you cause, at least to yourself. Don’t use the term consequence lightly. An outcome occurred in some correspondence to your action. A consequence occurred because your action caused it[5].
Ordinary pressures to hide beliefs
I believe that there are a number of pressures on discussions of beliefs about moral actions. Those pressures include:
social expectations and norms
your values and feelings
your attention and actual commitment
To be a good scout involved in intersubjective validation of your beliefs, don’t lie to others or yourself. If you start off by asserting what you believe, then you might be able to align your beliefs with the best evidence available. What you do by expressing your beliefs is:
ignore expectations of rationality or goodness-of-fit to your beliefs.
express your feelings and plausibly express your values as well.
commit to the discussion of your beliefs
provided that describing your unweighted beliefs is tolerable to those involved.
On way to make discussing your actual beliefs tolerable to yourself is to provide a means to constrain them. You can limit their applicability or revise some of their implications. Here are two short examples:
I start out believing that God creating the universe 6000 years ago. I learn enough about evolution and planetary history and theories of the start of the universe that I revise my belief. Yes, God created everything, and God created us, but did it 4+ billion years ago in some kind of primordial chemical soup.
I start out believing that climate change is a liberal hoax. I learn about climate science and recognize that climate researchers are mostly sincere. Now I believe that only the most liberal climate researchers make up how bad climate change will be, and I can tell those liberals apart from the truthful majority at the IPCC.
You might think these are examples of poor epistemics or partially updated beliefs, not something to encourage, but I disagree.
To simply make belief statements probabilistic implies that we are in doubt a lot, but with reference to unconstrained forms of our original beliefs. Constraining a belief, or chipping away at the details of the belief, is a better approach than chipping away at our confidence in the belief. We curate our ontologies and knowledge more carefully if we constrain beliefs rather than manipulate our confidence in beliefs.
Lets consider a less controversial example. Suppose some charity takes action to build a sewage plant in a small town somewhere. Cement availability becomes intermittent and prices rise, so the charity’s effectiveness drops. Subjective confidence levels assigned for the altruistic value of the charity’s actions correspondingly drop. However, constraining those actions to conditions in which cement is cheap and supply is assured can renew interest in the actions. Better to constrain belief in the action’s effectiveness than to reduce your confidence level without examining why. We’ll return to this shortly.
A summary of advantages of unweighted beliefs
At the risk of being redundant, let me summarize the advantages of unweighted beliefs. Preferring unweighted beliefs to weighted beliefs with epistemic confidence levels does offer advantages, including:
an unambiguous statement of what you hold true. An unweighted belief is not hedged. “I believe in god” gives me information that “I ninety-five percent believe in god.” does not[6].
an intuitive use of the concept of beliefs. Beliefs, as I think most people learned about them, are internal, sometimes rational, sometimes taken on faith, knowledge of what is true.
a distinction of believed truths from contingent conclusions or validated hypotheses, as noted earlier.
an opportunity to add constraints to your belief when evidence warrants.
a way to identify that you think true something that contradicts your epistemic best practices[7].
a way to distinguish hunches, intuitions, or other internal representations of beliefs from conclusions or hypotheses.
A way to name an assertion that you can revise or revoke rather than assign to a probability.
an opportunity to focus on the ontology or knowledge implicit in your belief.
The nature of the problem
A community of thinkers, intellectuals, and model citizens pursuing altruism is one that maintains the connection between:
thinking about beliefs about the consequences of one’s actions.
intersubjective validation of the consequences of one’s actions.
taking responsibility for changing the consequences of one’s actions for others.
The altruistic mission of the EA community is excellent and up to the task. I will propose a general principle for you, and that’s next.
Altruism valuations lack truth outside their limiting context
If you decide that your actions produce a consequence, then that is true in a context, and limited to that context. By implication, then, you have to assure that the context is present in order to satisfy your own belief that the action is producing the consequence that you believe it does.
An example of buying cement for a sewage-processing facility
For example, if you’re buying cement from a manufacturer to build a sewage-processing facility, and the payments are made in advance, and the supplier claims they shipped, but the building site didn’t receive the cement, then the charitable contributions toward the purchase of cement for the sewage treatment plant do not have the consequence you believe. Your belief in what you cause through your charity is made false. You don’t want to lose your effectiveness, so what do you do? You:
add the precondition that the cement reaches the building site to your list of what causes the sewage plant to be built. You bother because that precondition is no longer assured.
add actions to your causal model that are sufficient to ensure that cement deliveries complete. For example, possible actions might include:
to send a representative to the supplier’s warehouses.
to send employees to ride the freight train and verify delivery.
to create a new contract with the cement manufacturer under which payment is made only after delivery.
Let’s say that cement deliveries are only reliable if you take all those actions. In that case your beliefs about paying for cement change:
original belief: If we pay for cement, we can build our sewage plant.
revised belief: If we arrange a cement purchase, and send a representative to the supplier, and that representative rides the freight train carrying the cement to the construction location before we send payment for the cement, then we can build our sewage plant.
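To make the revision concrete, here is a minimal sketch in Python of the two beliefs above as unweighted predicates. The function and parameter names are hypothetical, chosen only for illustration; the point is that the revision adds conjuncts (constraints) to the belief rather than lowering a confidence number.

```python
# A minimal sketch: the original and revised (constrained) beliefs as
# predicates over preconditions. All names here are hypothetical.

def original_belief(paid_for_cement: bool) -> bool:
    """Original belief: if we pay for cement, we can build our sewage plant."""
    return paid_for_cement

def revised_belief(purchase_arranged: bool,
                   representative_sent: bool,
                   representative_rode_train: bool) -> bool:
    """Revised belief: the same consequence, now constrained by the
    preconditions that experience showed were necessary."""
    return purchase_arranged and representative_sent and representative_rode_train

# The revised belief stays unweighted -- true or false given its
# preconditions -- but it now holds in a narrower context.
print(revised_belief(True, True, False))  # False: a precondition failed
```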
You could do something more EAish involving subjective probabilities and cost effectiveness analyses, such as:
track the decline in the subjective probability that your build site receives cement, and below a specific probability take action just sufficient to raise subjective estimates of delivery efficiency above a certain percentage.
develop a cost-effectiveness model around cement deliveries. Ask several employees to supply subjective probabilities for future delivery frequencies under each proposed alternative for correcting delivery failures. Weight their responses with their previously determined reliability coefficients to choose a cost-effective means of solving delivery problems (a sketch of this weighting appears below).
rather than send a representative to the supplier or have that person ride the freight train, just negotiate a new contract with a supplier specifying that payment is made only after cement is freighted to the build site. If successful, it is the cheapest action to start with.
If that process also results in a revised true belief, then good enough. There might be other advantages to your seemingly more expensive options that actually reduce costs later[8].
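As an illustration of the weighting step mentioned in the list above, here is a minimal sketch with made-up numbers. Nothing here is a prescribed EA method; the reliability coefficients and estimates are assumptions for illustration only.

```python
# A minimal sketch: employees give subjective probabilities that a proposed
# fix makes deliveries succeed, and each estimate is weighted by that
# employee's previously determined reliability coefficient. All figures
# are hypothetical.

def weighted_estimate(estimates, reliabilities):
    """Reliability-weighted average of subjective delivery probabilities."""
    total_weight = sum(reliabilities)
    return sum(p * w for p, w in zip(estimates, reliabilities)) / total_weight

# Hypothetical figures for one proposed fix (a pay-after-delivery contract):
estimates = [0.9, 0.7, 0.8]      # each employee's subjective probability
reliabilities = [1.0, 0.5, 0.8]  # each employee's reliability coefficient

print(round(weighted_estimate(estimates, reliabilities), 3))  # 0.822
```

Repeating this for each alternative and dividing by its cost would give a crude ranking of cost-effective fixes.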
You can take your causal modeling of your actions further[9]. You can:
look further upstream at causes of the problem. For example, explore what caused the need for a charity to pay for a sewage treatment plant.
assess preconditions of the preconditions of your possible actions. Maybe your employees won’t ride the freight train without hazard pay.
look at your other actions to see if they cause the undesirable preconditions that make your charitable effort necessary. Are you contributing to the reasons that the town cannot afford a sewage treatment plant?[10]
Your altruism is limited to the contexts where you have positive consequences
Yes, your actions only work to bring about consequences in specific contexts in which certain preconditions hold. You cannot say much about your altruism outside those contexts unless you consider your actions outside those contexts. Effective altruists leverage money to achieve altruism, but that’s not possible in all areas of life or meaningful in all opportunities for altruistic action. This has implications that we can consider through some thought experiments.
Scoring altruistic and selfish consequences of actions
Actions can be both good and evil in their consequences for yourself or others. To explore this, let’s consider a few algebraic tools that let you score, rank, scale, and compare the altruistic value of your actions in a few different ways. I think the following four-factor model will do.
Here’s a simple system of scoring actions with respect to their morality, made up of a:
benefit score
harm score
self-benefit score
self-harm score
Let’s use a tuple of (benefit score, harm score, self-benefit score, self-harm score) to quantify an action in terms of its consequences. For my purposes, I will consider the action of saving a life to have the maximum benefit and self-benefit and give it a score of 10. Analogously, the maximum harm and self-harm score is also 10. All other scores relative to that maximum of 10 are made up.
A Thought Experiment: Two rescues and a donation
Three altruistic scenarios
The Fire Rescue: Consider a woman who rushes into a burning building and carries out two small children (creating 10 points of benefit each, 20 in total). The way she carries them out leaves both with smoke inhalation (inflicting 2 points of harm), and she is badly injured herself (suffering 10 points of self-harm), gaining nothing for herself (0 points of self-benefit).
The Drowning Rescue: Consider the same person walking a different path. She spots a small boy drowning in a flooded gutter, and while rescuing the boy (creating 10 points of benefit), ruins her favorite boots (inflicting 1 point of self-harm). She takes the boy home, and gets a positive reputation in town for her action (creating 2 points of self-benefit).
The Bednet Donation: Consider the same woman after upping her contribution to the Against Malaria Foundation. She funds distribution of 200 bednets (creating 5 points of benefit each for 200 people), does no one else any harm, doesn’t benefit herself at all, but causes self-harm with a small loss of discretionary spending (causing herself 0.2 points of self-harm).
In tuple form:
Fire Rescue: (20,2,0,10)
Drowning Rescue: (10,0,2,1)
Bednet Donation: (1000,0,0,0.2)
Distance between Fire Rescue and Drowning Rescue: (10^2 + 2^2 + 2^2 + 9^2)^0.5 ≈ 13.7
Distance between Bednet Donation and Fire Rescue: (980^2 + 2^2 + 0^2 + 9.8^2)^0.5 ≈ 980.1
Subtraction of Drowning Rescue from Fire Rescue: (20,2,0,10) - (10,0,2,1) = (10,2,-2,9)
Subtraction of Fire Rescue from Bednet Donation: (1000,0,0,0.2) - (20,2,0,10) = (980,-2,0,-9.8)
The distances between actions measure how close together the actions are in terms of their scores. For example, the fire rescue is 13.7 points away from the drowning rescue but 980 points away from the bednet donation.
The subtractions of actions show the differences between actions per factor. For example, the bednet donation is 980 points of benefit higher than the fire rescue.
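For readers who want to experiment with these scores, here is a minimal sketch in Python of the tuple arithmetic used above: Euclidean distance and per-factor subtraction over (benefit, harm, self-benefit, self-harm) tuples. The scores are the made-up values from the thought experiment.

```python
# A minimal sketch of the four-factor scoring arithmetic:
# (benefit, harm, self-benefit, self-harm) tuples, Euclidean distance,
# and per-factor subtraction.

import math

def distance(a, b):
    """Euclidean distance between two action scores."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def subtract(a, b):
    """Per-factor difference between two action scores."""
    return tuple(x - y for x, y in zip(a, b))

fire_rescue = (20, 2, 0, 10)
drowning_rescue = (10, 0, 2, 1)
bednet_donation = (1000, 0, 0, 0.2)

print(round(distance(fire_rescue, drowning_rescue), 1))  # 13.7
print(round(distance(bednet_donation, fire_rescue), 1))  # 980.1
print(subtract(fire_rescue, drowning_rescue))            # (10, 2, -2, 9)
print(subtract(bednet_donation, fire_rescue))            # (980, -2, 0, -9.8)
```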
Some legitimate concerns about this thought experiment
Disagreements over these thought experiments could include:
the individual scores are subjective and relative. Is preventing a death (a benefit score of 10) really equivalent to preventing two cases of malaria (a benefit score of 5 * 2)?
the individual scores conflate incompatible types of altruism. Is death comparable to ruining a pair of boots?
the causal models are debatable. Was the large donation actually sufficient to generate new bednet deliveries to Uganda? In the fire rescue thought experiment, are we sure the children suffered from smoke inhalation because of how the rescuer carried them out?
some scores are not possible in practice. For example, is it possible to have null benefit or harm ever?
If you have these concerns, then you can bring them up to improve your use of such a thought experiment. For example, you could disallow absolute 0 values of any individual score.
The influence of beliefs about causation on your sense of responsibility
EA folks complain that they suffer depression or frustration when they compare their plans for personal spending against the good they could do by donating their spending money. Let’s explore this issue briefly.
Consider the woman who donated to Against Malaria for 200 bednets, with 1000 points of altruistic benefit and 0.2 points of self-harm associated. Suppose she needed a vacation and decided to withhold the donation money and spend it on a week-long spa vacation, gaining 3 points of self-benefit. If she felt guilty for doing so, is that because she believed that she caused 200 malaria infections, or because she believed she failed to prevent them from happening by her choice of action?
Donate to Against Malaria: (1000, 0, 0, 0.2)
Spend donation money on spa trip to rejuvenate: (0,0,3,0)
Spend donation money on spa trip and undesirably cause 200 malaria infections: (0,1000,3,0)
The question comes down to what she believes about what she causes. Does the spa trip deserve a scoring of (0,0,3,0), or of (0,1000,3,0)? Did she cause 200 people to contract malaria or not, when she took her spa trip?
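As a minimal sketch of the point, with the same made-up scores: the tuple assigned to the spa trip depends entirely on a causal belief, represented here as a hypothetical boolean flag.

```python
# A minimal sketch: the score of the spa trip depends on what the woman
# believes about her causal role. The flag is hypothetical; it stands for
# her belief about whether withholding the donation caused the infections.

def spa_trip_score(believes_she_caused_infections: bool):
    """(benefit, harm, self-benefit, self-harm) under each belief."""
    if believes_she_caused_infections:
        return (0, 1000, 3, 0)  # the 200 infections go onto her ledger
    return (0, 0, 3, 0)         # the infections are not her consequence

print(spa_trip_score(False))  # (0, 0, 3, 0)
print(spa_trip_score(True))   # (0, 1000, 3, 0)
```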
All this example illustrates is our limitations in assigning ourselves a causal role in events. In the earlier example, if you caused 200 people to contract malaria, you have reason to feel guilty. On the other hand, spending some money on a vacation doesn’t give anyone malaria, and certainly isn’t cause for guilt, all other things equal.
I have emphasized beliefs throughout this red team exercise because:
beliefs assert what you consider to be consequences of your actions.
beliefs decide what goes onto your own ledger of your good, evil, or selfish actions.
beliefs guide your decisions and feelings in many situations of moral significance.
The flow of time carrying your actions defines a single path of what exists
Your actions precede their consequences. What you believe that you cause is found in the consequences of your actions. You can look forward in time and predict your future consequences. You can look at diverging tracks of hypothetical futures preceded by actions you could have taken but did not take. However, those hypothetical futures are nothing more than beliefs.
On those hypothetical tracks, you can perceive the consequences of what you could have done, like the woman in the fire rescue who elected to call 911 and wait for the fire truck to arrive. Later on, she’s haunted by what would have happened if she had rushed into the building and carried out two living healthy children instead of letting them die in a fire. What would have happened is not an alternate reality though. It is just a belief. The poor woman, haunted by guilt, will never know if her belief is true.
Anyone suffering regret or enjoying relief about what they didn’t do or did do is responding to their beliefs, not some alternate set of facts or escaped reality. Treat ideas of alternative past, present or future events as beliefs.
Conclusion: This was a red-team criticism, not just an exploration of ideas
If I did my job here, then you have understood:
some criticisms of the use of betting odds and subjective probabilities in EA.
some suggestions for why and how to use unweighted beliefs.
a model of preconditions, actions, and consequences.
a model of only one path through time and experience in contrast with many paths in belief.
Renewing the unqualified use of assertions that indicate unweighted beliefs will make it easier for you to apply useful critical thinking and prediction skills.
I offered a few recommendations, including that you:
remember that beliefs are more or different than conclusions and hypotheses.
qualify your conclusions and hypotheses as such.
state your unweighted beliefs with confidence.
challenge beliefs to learn the particulars of their implicit ontology or knowledge.
match causal pathways and ontological lists to make predictions.
qualify a prediction with a discussion of the causal pathway it matches or the ontology it presumes, instead of with a prediction probability.
model preconditions, actions, and consequences to achieve your goals.
explore the context of any actions of yours that you want to perform.
I attribute your assessment of your action’s consequences to your beliefs, not your rationality, regardless of how you formed that assessment and any confidence that you claim in it.
I believe that you can succeed to the benefit of others. Good luck!
Bibliography
Arciem LLC, Flying Logic User Guide v3.0.9. Arciem LLC, 2020.
Cawsey, Alison, The Essence of Artificial Intelligence. Pearson Education Limited, 1998.
Daub, Adrian, What Tech Calls Thinking. FSG Originals, 2020.
Fisher, Roger, and William Ury, Getting to Yes. Houghton Mifflin, 1981.
Galef, Julia, The Scout Mindset. Penguin, 2021.
Goldratt, Eliyahu M., et al., The Goal. North River Press, 2012.
Johnstone, Albert A., Rationalized Epistemology. SUNY Press, 1991.
Kowalski, Robert, Computational Logic and Human Thinking. Cambridge University Press, 2011.
Pearl, Judea, and Dana Mackenzie, The Book of Why. Hachette UK, 2018.
In fact, any time that you can affect causally. At least, so far, I have not heard any direct arguments for the immorality of not causing beings to exist.
Altruistic value is the value to others of some effect, outcome, or consequence. I call the value altruistic because I consider it with respect to what is good for others that experience an effect, outcome, or consequence. In different contexts, the specifics of altruistic value will be very different. For example, EA folks talk about measuring altruistic value in QALYs when evaluating some kinds of interventions. Another example is that I think of my altruistic value as present in how I cause others’ work to be more efficient in time or energy required. There is the possibility of anti-altruistic value, or harm to others, or evil consequences. Some actions have no altruistic value. They neither contribute to nor subtract from the well-being of others. Those actions have null altruistic value.
Sorry, Julia, if I butchered your concepts.
I don’t believe that the mind uses an inference system comparable to production rules or forward-chaining or other AI algorithms. However, I think that matching is part of what we do when we gather information or make predictions or understand something. I don’t want to defend that position in this critique.
Of course, no matter what confidence value (2%, 95%, very, kinda, almost certainly) you give an assertion, if you don’t believe it, then it’s not your belief. Conversely, if you do believe an assertion, but assert a low confidence in it, then it is still your belief.
And no, I am not interested in betting in general.
Beliefs about cause-effects are not scientific assertions about the truth of those cause-effects. Instead, they have all the legitimacy of anything else you believe, including when you assign a subjective probability to that belief.
The narrative details determine what’s cost effective, but a plausible alternative is that the area is subject to theft from freight trains, and an employee riding the freight train could identify the problem early, or plausibly prevent the thefts, for example with bribes given to the thieves.
If you’re interested, there are various methods of causal analysis without probabilities. You can find them discussed in a manual from the company behind Flying Logic software, or in books about the Goldratt methods of problem-solving, and perhaps in other places.
It would be difficult to learn that your actions in your career increase poverty in, or force migration from, a country receiving financial support from an EA charity, particularly when you also contribute a good chunk of your income to that charity. If you work for some banks or finance institutions, then it is plausible that you are causing some of the problem that you intend to correct.
There could be intuitive matching algorithms put to use that let a person quantify the match they determine between the various causal pathways in their ontology and real-world events. I am just speculating, but those algorithms could serve the same purpose as subjective probabilities in forecasting. The study of case-based reasoning could offer some insights.