Start a debate with another party if all of the following are true:
I have resources to participate in a debate.
the debate question interests me.
I want to debate.
the other party wants to debate.
we both seem interested in truth-building (either as a scout or a soldier, using Galef’s model).
Instead of starting a debate, offer information to another party if all of the following are true:
I have information to offer.
I want to offer information.
the other party is willing to receive information, or the debate involves an open forum that includes third parties.
either I or the other party does not want to debate.
the other party could benefit from the information, or the information clarifies my position to relevant third parties.
End a debate with another party if any of the following are true:
the debate question is now settled to the satisfaction of both parties.
the debate topic no longer interests me.
either I or the other party does not want to debate anymore.
one party is not interested in truth-building anymore.
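Below is a minimal sketch of this policy as boolean checks, in Python. The flag and function names are my own and purely illustrative; the logic simply mirrors the conditions listed above.

```python
from dataclasses import dataclass

@dataclass
class DebateContext:
    # Hypothetical flags standing in for the policy's conditions.
    i_have_resources: bool
    question_interests_me: bool
    i_want_to_debate: bool
    other_wants_to_debate: bool
    both_seek_truth_building: bool
    i_have_information: bool
    i_want_to_offer_information: bool
    other_willing_to_receive: bool       # or the debate involves an open forum with third parties
    information_benefits_someone: bool   # the other party, or relevant third parties
    question_settled: bool = False
    topic_still_interests_me: bool = True

def should_start_debate(c: DebateContext) -> bool:
    # Start a debate only if every start condition holds.
    return all([
        c.i_have_resources,
        c.question_interests_me,
        c.i_want_to_debate,
        c.other_wants_to_debate,
        c.both_seek_truth_building,
    ])

def should_offer_information_instead(c: DebateContext) -> bool:
    # Offer information rather than debate when all of these hold.
    return all([
        c.i_have_information,
        c.i_want_to_offer_information,
        c.other_willing_to_receive,
        not (c.i_want_to_debate and c.other_wants_to_debate),
        c.information_benefits_someone,
    ])

def should_end_debate(c: DebateContext) -> bool:
    # End a debate if any end condition holds.
    return any([
        c.question_settled,
        not c.topic_still_interests_me,
        not (c.i_want_to_debate and c.other_wants_to_debate),
        not c.both_seek_truth_building,
    ])
```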
Answers to potential questions about this policy
What does “truth-building between two parties” mean with regard to a debate question?
By “truth-building between two parties”, I mean:
both parties seek the truth.
both parties share truthful information with each other that helps answer the debate question.
neither party withholds truthful information from the other that would help answer the debate question.
neither party deceives the other party with false information that is ostensibly intended to help answer the debate question.
Why does a debate involve a debate question?
Any topic of debate that my policy addresses is more than a research topic. It is a topic about which two parties have conflicting beliefs. For my purposes, a conflict between two beliefs can be understood as a question, either a yes/no question or a multiple-choice question. Therefore, any topic of debate can be considered a question to answer between the two parties involved in a debate.
It’s important to recognize the troubles posed by deepfakes. Stable Diffusion makes those troubles real. Its use, enhanced and unfettered, poses a genuine threat to human culture and society, because it will make fabricated images and video indistinguishable from those produced by authentic means. For example, historical imagery can be convincingly faked, along with crime footage, and so on[1]. But that is not why I wrote this shortform.
Stable Diffusion was put out, open-source, with no debate, and no obstacles other than technical and funding ones. You AI safety folks know what this means: big players like Google, Microsoft, and OpenAI are producing AI art models with restrictions of various sorts, and a start-up comes along and releases a similar product with no restrictions. Well, the license says you cannot use it for certain things. Also, it has a safety feature that you can turn off. I believe that people are turning it off.
Everyone discussing DALL-E 2 and its competitors and their restrictions and limitations is not really having the same conversation anymore. Now the conversation is about what to do when that technology is let loose on the web, unrestricted, open-source, free to be developed further, and put to use freely. Hmm.
I hope there is some element of the AI safety community looking at how to handle the release of AGI software (without safeguards) into the global software community. Clearly, there is only so much you can do to put safeguards on AGI development. The real question will be what to do when AGI development occurs with no safeguards and the technology is publicly available. I see the parallel easily. The same ethical concerns. The same genuine restraint on the part of large corporations. And the same “oh well” when some other company doesn’t show the same restraint.
When society includes widespread use of life extension technology, a few unhealthy trends could develop.
the idea of being “forced to live” will take on new and different meanings for folks in a variety of circumstances, testing institutional standards and norms that align with commonly employed ethical heuristics. Testing the applicability of those heuristics will result in numerous changes to informed and capable decision-making in ethical domains.
life-extension technology will become associated with longevity control, including control over the time and condition in which one passes away. At the moment, that is not a choice. In the future, I expect society will legalize choice of life length (maybe through genetic manipulation of time of death), or some proxy for a genetically programmed death (for example, longevity termination technologies). I suspect that those technologies will be abused in a variety of contexts (for example, with unwilling users).
longevity technology will substitute for health treatment, that is, behaviors that encourage healthy longevity and preventive medical care will be replaced by health-reducing behaviors whose consequences are treated with frequent anti-aging treatments.
frustration that the body is not resilient against typical health-reducing behaviors will encourage further technology exploration aimed at allowing health-reducing behaviors without physical consequences. The consequence that concerns me is the resulting lack of development and exploration of the ability to choose alternatives to health-reducing behaviors.
NOTE: Human experience is typically defined by our experience of ourselves at various biological stages of life. While we can shorten or extend those stages, and people typically want the biological health, maturity, and looks of a 20-something for as long as possible, we actually do experience ourselves and our relationships with others in terms of our true ages.
Newcomb’s problem, honesty, evidence, and hidden agendas
Thought experiments are usually intended to stimulate thinking, rather than be true to life. Newcomb’s problem seems important to me in that it leads to a certain response to a certain kind of manipulation, if it is taken too literally. But let’s assume we’re all too mature for that.
In Newcomb’s problem, a person is given a context and a suggestion: their behavior has been predicted beforehand, and the person with that predictive knowledge is telling them about it. There are hypothetical situations in which that knowledge would be correct, but Newcomb’s problem doesn’t appear to be one of them.
But to address the particulars I will focus on testing the scientist’s honesty and accuracy. Let’s recap quickly:
the scientist claims to have made a prediction, and that the prediction determines which of two possible behavioral options you will take: take both boxes from the scientist, or take the opaque one only.
the scientist claims to decide whether to put $1,000,000 in an opaque box before interacting with the person (you) who enters the scientist’s tent, based on a brain scan machine posted at the tent entrance. The brain scan machine gives the scientist a signal about what you’re likely to do, and the scientist either puts a million in the opaque box, or not. In addition, there’s a clear box in the tent containing $1,000.
you can’t see what’s in the opaque box the whole time you’re in the tent. You can see the $1000 the entire time.
if the scientist believes what they claim, then the scientist thinks that interaction with you will have no effect on what you do once you walk in the tent. It was decided when you walked through the door. In other words, in the scientist’s mind, no matter what the scientist or you would otherwise do, only one of two outcomes will occur. You will take both boxes or just the opaque box.
So here’s what I think. There are far more situations in life where someone tells you a limited set of your options from a larger set than there are situations in which someone tells you your full set of options. The scientist claimed only two outcomes would occur (put differently, that you would do one of two things). The scientist supposedly has this brain scan technology that tells them which of those two things you will do, and the scientist is confident that the technology works. Your willingness to believe the scientist at all depends on believing the scientist’s claims in their entirety. That includes the scientist’s claims about the reliability of the machine. Once some claims prove false, you have reason to question the rest. At that point, the thought experiment’s setup fails. Let’s test the scientist’s claims.
So, don’t take either box. Instead, walk out of the tent. If you make it out without taking any boxes, then you know that the scientist was wrong or lying about what you would do. You did not take any boxes. You just left both boxes on the table. Now, think this over. If the scientist was sincere, then there’s a mad scientist with $1,001,000 in the tent you just walked out of who either thought you would follow their instructions or thought that they had predicted you so well that they could just tell you what you would do. If the scientist was not sincere, then there’s a lying and manipulative scientist in the tent with $1,000 and an opaque mystery box that they’re hoping you’ll take from them.
BTW: If someone gives me free money, even $1,000, to take a mystery package from them, I decline.
But, you say, “I think it’s understood that you could walk out of the tent, or start a conversation, maybe even ask the scientist about the opaque box’s contents, or do other things instead.” However, if that’s so, why couldn’t you just take the $1000, say thanks, and leave rather than take the opaque box with you? What constrained your freedom of choice?
Was it the mad scientist? Did the mad scientist zipper the tent entrance behind you and booby-trap the boxes so you either take both boxes or just the opaque one? Is the scientist going to threaten you if you don’t take either box? If so, then you’ve got a mad scientist who’s not only interested in predicting what you do, but also interested in controlling what you do, by constraining it as much as they can. And that’s not the thought experiment at all. No, the thought experiment is about the scientist predicting you, not controlling you, right? And you’re an ethical person, because otherwise you would shake the scientist down for the million still in the tent, so we’ll ignore that option.
However, in case the thought experiment is about the scientist controlling you, well, I would leave the tent immediately and be grateful that the scientist didn’t choose to keep you there longer. That is, leave if you can. Basically, it seems that if you do anything too creative in response to the scientist, you could be in for a fight. I would go with trying to leave.
But let’s assume you don’t believe that the scientist is controlling you in any way, since controlling you seems like a different thought experiment. Let’s just go with you walking out of the tent without any boxes.
Catch your breath, think over what happened, and don’t go back in the tent and try to interact with the scientist anymore. Remember, anyone willing to do that sort of thing to strangers like you is plausibly a desperate criminal wanting you to take a mysterious package from them. Or a distraught (and plausibly delusional) scientist who you just proved has a worthless brain scan machine that they wasted millions of dollars testing.
EDIT: ok, so in case it’s not obvious, you disproved that the scientist’s brain scanner works. It predicted two behavioral outcomes, and you chose a third from several, including:
trying to take the $1000 out of the clear box and leaving the opaque box behind
shaking down the scientist for the million presumably in the tent somewhere, if it’s not all in the two boxes
starting a conversation with the scientist, maybe to make a case that you really need a million dollars no matter what kind of decision-maker you are
leaving the tent asap
and plausibly others
By disproving that the brain scanner works reliably, you made a key claim of the scientist’s false: “my brain scanner will predict whether you take both boxes or only one”. Other claims from the scientist, like “I always put a million in the opaque box if my brain scanner tells me to” and “So far, my brain scanner has always been right” are now suspect. That means that the scientist’s behavior and the entire thought experiment can be seen differently, perhaps as a scam, or as evidence of a mad scientist’s delusional belief in a worthless machine.
You could reply:
“What if the brain scanning machine only works for those situations where you take both boxes or only the opaque box and then just leave?”: Well, that would mean that loads of people could come in the tent, do all kinds of things, like ransack it, or take the clear box, or just leave the tent while taking nothing, and the machine gives the scientist a bogus signal for all of those cases. The machine has, then, been wrong, and frequently.
“What if the brain scanner gives no signal if you won’t do one of the two things that the scientist expects?”: Interesting, but then why is the scientist telling you their whole spiel (“here are two boxes, I scanned your brain when you came through the door, blah blah blah...”) after finding out that you won’t just take one of the two options that the scientist offers? After all, as a rational actor you can still do all the things you want to do after listening to the scientist’s spiel.
“Maybe the scientist changes their spiel, adds a caveat that you must follow their instructions in order for the predictions to work.” OK, then. Let’s come back to that.
“What if there are guards in the tent, and you’re warned that you must take either the opaque box or both boxes or the guards will fatally harm you?”: Well, once again, it’s clear that the scientist is interested in controlling and limiting your behavior after you enter the tent, which means that the brain scanner machine is far from reliable at predicting your behavior in general.
“Hah! But you will choose the opaque box or both boxes, under duress. This proves that some people are one-boxers and others are two-boxers. I got you!”: Well, some people would follow the scientist’s instructions, you’re right. Other people would have a panic attack, or ask the scientist which choice the scientist would prefer, or just run for their lives from the tent, or even offer the guards a chance to split the scientist’s money if the guards change sides. Pretty soon, that brain scanning machine is looking a lot less relevant to what the tent’s visitors do than the guards and the scientist are. From what I understand, attempting to give someone calm and reassuring instructions while also threatening their lives (“Look, just take the $1,000 and the opaque box, everything will be fine”) doesn’t tend to work very well.
“Wait a minute. What if the scientist has a brain scanning device that predicts hundreds of different behaviors you could do by scanning you as you walk in the tent, and ...”: Let me stop you there. If the scientist needs that kind of predictive power, and develops it, it’s to know what to do when you walk in the tent, not just to know what you will do when you walk in the tent. And just because the scientist knows what you will do if you’re confronted with a situation doesn’t mean that the scientist has a useful response to what you will do. At this point, whose decision-making is really under the microscope, the tent’s visitor’s or the scientist’s?
“Let’s back this up. All we’re really thinking about is someone who willingly participates in the scientist’s game, trusts the scientist, and follows the scientist’s instructions. Aren’t you just distorting the experiment’s context?” If someone claims to be able to predict your behavior, and the only way for their predictions to ever seem accurate is for you to play along with the options they provide, then don’t you see that dishonesty is already present? You are the one being dishonest, or you both are. You’re playing along with the mad scientist, or the mad scientist isn’t mad at all, but has some ulterior motive for wanting you to take an opaque box with you, or otherwise participate in their bizarre game. The predictions aren’t really about what you would do if confronted with two boxes in such a situation. The predictions are make-believe that you play with someone with boxes in a tent, and only if you’re that kind of person. Not everyone is.
“No, you just said that the visitor to the tent is ‘playing along’. But the thought experiment is about someone who trusts the scientist, and playing along is not trusting the scientist.” Yes, exactly the kind of thing that I’ve been cautioning you about. Don’t be one of those people. There are people who trust you and select among the options you give them for whatever reason you offer, no matter how contrary to existing evidence (e.g., of their own free will) the option selection is. Their decision strategies do not include acting on good evidence or understanding causality very well. And such people would likely leave with just the opaque box, and, if the scientist is to be believed, will be rewarded for it with a million dollars. However, they fall for every magic trick, and do not gather evidence carefully.
“No, no, it’s not a magic trick. The thought experiment says that the scientist is really checking the brain scanning machine and putting the money in the opaque box, or not, according to what the machine says, and then making the same claims to every visitor about how the whole experiment works, and asking the visitors to participate according to the scientist’s simple instructions. All along you’ve been distorting this every which way. The machine could fail, but we know it succeeds. It succeeds with everybody, and the point of the thought experiment is just to think through what you ought to do in that situation, to get the most money, if you agree to the scientist’s terms. The only way to prove the scientist wrong as a single visitor is to do everything right, leave with the opaque box only, but then find nothing inside. But we know that never happens.” I see. Yeah. OK! I think you’ve changed the experiment a little, though. Before, it was just: walk in, and get predicted. Now, it’s: walk in and choose to cooperate, and the scientist is telling the truth, and the brain scanning machine appears to work, and then get predicted. And you can’t just play along; a visitor has to believe the scientist, and for good reason, in order for people to draw any conclusions about what the experiment means.
“What? No, you don’t have to believe the scientist. You can play along, get some money, just choose one or two boxes. That’s what everyone should do, and the experiment shows it.” Some people would do that. We might as well flip a coin, or just pretend that we have reason to believe the scientist’s claim for causal reasons, and make up a causal reason. How about something like, “Hey, that million in the opaque box is like Schrodinger’s cat.” Maybe we make up a causal reason in hindsight after we find that million in the opaque box and leave the clear box behind. However, “rational” people would only follow the instructions if they believed the evidence warranted it, and then those “rational” people would explore the reasons why. As far as I know, this thought experiment is supposed to show that evidential and causal decision theory can conflict, but in fact, I think it only shows that causal decisions can be revised based on new evidence. For example, brain scanner prediction, mind control, subtle influence by the scientist, money teleportation, time travel by someone observing you and taking the money back in time, or an unlikely string of random predictive success by a totally useless brain scanner are all potential explanations of why the scientist’s machine would appear to work, if you decided to test whether it works by taking the opaque box.
“So what? Then the thought experiment only applies to people who follow instructions and trust the scientist and have good reason to trust the scientist’s claims, if you accept the idea that it’s supposed to distinguish evidential and causal decision theory. All your discussion of it managed to do was convince me that the thought experiment is well-designed, but also plausible. I think brain scanners like that, ones that work only in a context where you choose to follow instructions, are plausible. If they were built, then setting something like this up in real life would be easy.” Yeah, and expensive. Plenty of people would take the opaque box only. I think this makes me want to revise the definition of “plausible” a little bit, for myself. I would just leave the tent. Julia Galef also thinks that such devices as brain scanners are plausible, or she claimed that, in her old video. So you’re in good company.
There’s this thing, “the repugnant conclusion”. It’s about how, if you use aggregate measures of utility for people in a population, and consider it important that more people each getting the same utility means more total utility, and you think it’s good to maximize total utility, then you ought to favor giant populations of people living lives barely worth living.
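A toy calculation (my own numbers, purely illustrative) shows the mechanism: a huge population of barely-positive lives can beat a small population of excellent lives on total utility.

```python
# Toy numbers, purely illustrative of the repugnant conclusion.
thriving_population = 1_000              # people living excellent lives
thriving_utility_each = 100

huge_population = 1_000_000              # people with lives barely worth living
barely_positive_utility_each = 1

total_thriving = thriving_population * thriving_utility_each              # 100,000
total_barely_positive = huge_population * barely_positive_utility_each    # 1,000,000

# Under "maximize total utility", the huge, barely-happy population wins.
assert total_barely_positive > total_thriving
```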
Yes, it’s a paradox. I don’t care about it because there’s no reason to want to maximize total utility by increasing a population’s size that I can see. However, by thinking so, I’m led down a different path. I’m not a utilitarian, but I check in with the utilitarian perspective to understand some things better.
The form of utilitarianism that I introduce below is my best utilitarian perspective. I created it as part of rejecting the repugnant conclusion. I’ll let you ask the interested questions, if you have any, lol. Here it is.
Imagine an accounting system that, for each person, measures the utility, positive and negative, of that person’s actions for other people. Your own personal utilitarian ledger, but let’s assume someone else keeps it for you. That other person knows every action you take and what positive or negative utility your actions create.
If the term “utility” confuses you, think of other terms, like:
benefit or harm
happiness or suffering
gain or loss
pleasure or pain
improvement or decline
For example, positive utility that you create for someone could be an improvement in their health.
Your ledger holds information about what you cause people everywhere, millions, billions, even trillions of people, now and in the future. Well, OK, that’s only if you consider individuals from various other species as deserving a page in your ledger.
How would I make this ledger work? Here’s what I would do:
Put aside the mathematical convenience of aggregate measures in favor of an individual accounting of utility. If you can track the utility you cause for even two other people, your ledger keeper should be able to do it for two hundred billion, right? Sure.
Set up a few rules to handle when people cease to exist. Those rules should include:
Once a person’s existence ends, you can no longer create utility for that person. Accordingly, there should be no new entries onto your ledger about that person. Prior utility accounting associated with a person from when they were alive can be kept but not altered unless to better reflect utility that you created for the person when the person was still living.
Ledger entries associated with people who were expected to be conceived but are no longer expected to be conceived must be deleted entirely from the ledger, because those entries apply to a never-existent person. They are bogus.
Entries about the utility of a termination of existence (death) that you (inadvertently) cause should be full and complete, applying to all those affected by the death other than the dead person, including everyone still living or yet to be conceived who gets positive or negative utility from the person’s death.
The suffering or happiness involved in the person’s going through the process of dying should also be considered negative or positive utility and accounted for accordingly. A painful, slow death is a large negative harm to inflict on someone, whereas a quick, painless death in the presence of loving family is an improvement over a painful slow death, all other things equal.
Do not record death itself as a change in utility. The fact of death itself should not be recorded as a negative (or positive) utility applying to the now nonexistent person. There are still all the harms of death noted previously. Aside from those, however, the only change recorded on the ledger to the dead person’s utility is that there are no longer events generating new utility for the person, because the person no longer exists.[1]
Do not record intended consequences as creating utility just because they were intended. That is a different form of morality tracking, to do with keeping a record of a person’s character. On the utilitarian ledger, only actual utility gets recorded in an entry.
Other than those changes, I think you can go ahead and practice utilitarianism as you otherwise would, that is, doing the greatest good for the greatest number, and considering all people as equally deserving of consideration.
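Here is a minimal sketch of such a ledger in Python, assuming a keeper records one entry per affected person with the actual utility created. The class and method names are hypothetical; they only illustrate the rules above.

```python
from dataclasses import dataclass, field

@dataclass
class LedgerEntry:
    affected_person: str
    utility: float          # actual utility created, positive or negative
    description: str = ""

@dataclass
class PersonalUtilitarianLedger:
    entries: list[LedgerEntry] = field(default_factory=list)
    deceased: set[str] = field(default_factory=set)
    never_conceived: set[str] = field(default_factory=set)

    def record(self, affected_person: str, utility: float, description: str = "") -> None:
        # Only actual utility for existing people: no new entries for the dead,
        # and none for people who will never be conceived.
        if affected_person in self.deceased or affected_person in self.never_conceived:
            return
        self.entries.append(LedgerEntry(affected_person, utility, description))

    def mark_death(self, person: str) -> None:
        # Death itself adds no utility entry for the dead person; the suffering of
        # dying and the effects on survivors are recorded as ordinary entries
        # while those people exist.
        self.deceased.add(person)

    def mark_never_conceived(self, person: str) -> None:
        # Delete bogus entries for a person no longer expected to be conceived.
        self.never_conceived.add(person)
        self.entries = [e for e in self.entries if e.affected_person != person]

    def total_for(self, person: str) -> float:
        return sum(e.utility for e in self.entries if e.affected_person == person)
```

For example, a painful, slow death would show up as a large negative entry recorded for the dying person while they were still alive, plus entries for the survivors it affects; the fact of death itself adds nothing further to the dead person’s page.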
Utilitarianism developed in that way does not offer the typical problems of:
aggregate measures (average, total, variance) screwing up determination of utility maximization for many individuals
bogus accounting of utility intended for nonexistent or never-existent people.
bogus accounting of utility intended to be created for existent people but not actually created.
This personal utilitarian ledger only tells you about actual utility created in a single shared timeline for a population of individuals. Intentions and alternatives are irrelevant. Disliking death or liking having children is similarly irrelevant, unless contradiction of those values is considered a negative utility created for existent people. Of course, there are still the harms to existent others associated with death or absence of conception, and those are recorded in the ledger. And the welfare of the population as a whole is never actually considered.
An extension to the accounting ledger, one that tracks the consequences of actions for your own utility, would record your actions, including such interesting ones as actions to make hypothetical people real or to extend the lives of existing people. The extension would record actual consequences for you even if those actions create no utility for other existing people. You might find this extension useful if, as someone with a ledger, you want to treat your own interests as deserving equal consideration compared to others’ interests.
For me, a utilitarian ledger of this sort, or a character ledger that tracks my intentions and faithfully records evidence of my character, would provide a reference point for me to make moral judgments about me. Not a big deal, but when you look at something like the repugnant conclusion, you could ask yourself, “Who does this apply to and how?” I don’t require that I practice utilitarianism, but in a context where utilitarian considerations apply, for example, public policy, I would use this approach to it. Of course, I’m no policy-maker, so this ledger is little more than a thought experiment.
[1]
The only exception would be error-correction events to revise old utility information from when the person was living. Error-correction events only occur when the ledger keeper corrects a mistake.
As close as I get to epistemic status: “Mm, yes, this type of argument that I’m making is usually self-serving”
Common heuristics to let me assume an argument is self-serving or lacks intellectual honesty include:
the argument justifies (rationalizes) a vice. For example, I watch too much junk TV on Netflix, and commonly tell myself that I do it to relax. However, it is problematic, and I have better ways to relax than watching movies.
the argument justifies a plan that offers social acceptance, career development, or another important gain. I can backtrack from any convenient conclusion and ask myself, “Hmm, am I leaving out some important bit of contrary information, for example, that bears on other people’s well-being or some principle I keep or rule that I follow?”
the argument’s conclusion makes me feel sad or lonely or angry or ashamed about accepting it. Sometimes intellectual honesty is revealed by my feelings, but not always. Deciding what my feelings mean takes some introspection, and my ignorance makes this heuristic unreliable. Sometimes, though, a bothersome feeling is an important signal of a personal standard or preference that is not being met by my own rationalized behavior.
the argument’s entire structure changed with a change in what the conclusion would mean for me. For example, my argument for why I always take the trash out every evening can be quickly followed by why it’s OK to let the trash fester in the bin, if it happens to be raining hard outside and I don’t feel like getting wet.
the argument moves my attention to something else, and doing so changes my feelings for the better, but at the cost of my ignoring what was bothering me. My feelings don’t always correspond to a specific focus of my attention. I can focus on something else, and feel good about that, but as a consequence, ignore something that feels less good. At that point, I need other means, for example, cognitive aids or friendly helpers that remind me of what is relevant to problem-solve and give me pointers on how to deal with it.
the argument justifies idiocy. I keep a short list of human tendencies that I consider idiotic (for example, sadism), and most people don’t display them most of the time, but any argument that justifies them or enables them is suspect to me. Which actually starts a more difficult intellectual enquiry than one would expect.
a person stating their own position holds a dissonant perspective for a later time. People typically put effort in one direction and then take a break from that to be somebody else. For example, they trade in their priest’s collar for a roughneck jacket and go act like a jerk. By day, they’re a mild-mannered gentle person, but by night, they’re a gun-toting crazy vigilante! Or whatever. Point is, they adopt principles like it’s a part-time job or a uniform to wear. Statements about principles from those people, wonderful though those people may be, are typically self-serving or outright lies.
The list of minor examples of muddling solutions includes:
driving through traffic on the freeway and reaching a standstill. Bored, you turn on a podcast.
visiting a doctor, she informs you that you have a benign but growing tumor. Upset, you schedule an inexpensive surgery to remove it.
coming home, you find a tree branch broke through your attic window. Annoyed, you call a repairman to replace the window.
walking from your home to a nearby convenience store, you step in some smelly dog poop. Upset, you scrape some of it off with a twig and wash your shoe bottom after turning around and reaching home.
turning on the television, you see an emergency broadcast of strong winds in your area. Alarmed, you close the now-rattling windows.
sitting at your desk at work, the power goes out and your UPS starts beeping. Concerned, you quickly save your work on your computer.
preparing dinner for friends, you receive a text from a relative wanting dinner as well. Obligated, you agree and prepare more food.
These are small examples of muddling solutions. In the middle of your life, an unexpected problem arises. It could be worse, but you have some emotional response to it as you engage with the problem to solve it. The available system supports you in doing so. You take the actions you need, succeed, and can then go on with your life.
These are not immediate life and death decisions, but if the problem were to go on, they could inconvenience you or others in various ways. Either way, you muddle through. Perhaps the solution solves the problem completely, perhaps not. Other muddling solutions might be required, as events warrant. Whatever the case, there’s a typical pattern we know to follow. As problems arise, we solve them, and move forward.
Muddling through those situations toward a solution involves:
doing some routine process.
receiving disruptive news.
responding with a resourceful action (or not).
returning to our routine process.
repeating the above as much as necessary for as long as we can.
That’s what it means to muddle through.
What happens when we cannot muddle through
If you review the list of examples that I gave, you’ll see that each example relies on some resources being present, for example, a repairperson and money to pay her, or a local hospital that will do an inexpensive surgery, or the health insurance to pay for the surgery. When considering a scenario, paying attention to what happens when muddling solutions cannot be carried out gives you a sense of how civilization can fail as well.
What happens when the water doesn’t come from the tap? When the gas station has no gas? When insurance is not available at all? When there are no police or firemen or open hospitals or pharmacies?
Everyone understands the answers to those questions more easily than abstractions about risk management or high-impact events.
In general, the idea that others will mitigate a crisis for you makes sense; that is why we have government, public institutions, and businesses in place. They help us muddle through. However, depending on the root causes of a crisis and the size of the crisis, that mitigation might not be possible. In that case, civilization has collapsed.
The collapse of civilization is not that difficult to think through when put in terms of muddling through life and then finding that everyone cannot, because our civil, economic, or technological support systems have stopped working.
When exploring an area of knowledge with others, we can perform in roles such as:
Truth-building roles: mutual truth-seeking involving exchange of truthful information
scout (explores information and develops truthful information for themselves)
soldier (attacks and defends ideas to confirm their existing beliefs)
Manipulative roles: at least one side seeking to manipulate the other without regard for the other’s interests
salesperson (sells ideas and gathers information)
actor/actress (performs theatrics and optionally gathers information)
The Scout and Soldier model breaks down during communication when people believe that:
the truth is cheap and readily accessible, and so communication about important topics should serve other purposes than truth-building.
everyone else seems to be engaged in manipulating, either through lying or withholding information
withdrawal from engagement with others seems appropriate
joining in with other’s theatrics or sales efforts seems appropriate.
One of several lessons I draw from Galef’s excellent work is the contrast between those who are self-serving and those who are open to contradiction by better information. However, a salesperson can gather truthful information from you, like a scout, develop an excellent map of the territory with your help, and then lie to your face about the territory, leaving you with a worse map than before. Persons in the role of actors can accomplish many different goals with their theatrics, none of which help scouts develop truthful information.
Applying Heuristics about Aversive Experience Without Regard for Theories of Consciousness
TL;DR
Consciousness should have an extensional definition only. Misconstrual or reconception of the meaning of consciousness is an error. Robots, software agents, and animals can suffer aversive experience. Humans have heuristics to judge whether their own behavior inflicts aversive experience on other beings.
Those heuristics include:
that some behavior is damaging to the entity
that an entity can feign aversive experience
that reasonable people think some behavior is aversive
that the entity has something like system 1 processing.
Consciousness
Typically, an extensional definition of consciousness is a list of measured internal activity or specific external behavior associated with living people. Used correctly, “consciousness” has an extensional definition only. The specific items in the list to which “consciousness” refers depends on the speaker and the context.
In a medical context, a person:
shows signs of consciousness (for example, blinking, talking).
loses consciousness (for example, faints).
In an engineering context, a robot:
lacks all consciousness (despite blinking, talking).
never had consciousness (despite having passed some intelligence tests).
When misconstrued, the term “consciousness” is understood to refer to an entity separate from the entity whose behavior or measured internal activity[1] the term describes (for example, consciousness is thought of as something you can lose or regain or contain while you are alive).
When reconceived, a user of the term “consciousness” summarizes the items with an intensional definition.
An extensional definition of consciousness can mismatch the intensional definition.
For example, a nurse might believe that a person still has not actually regained their consciousness after a medical procedure that brings the person back to life 20 minutes after death even though the person now appears alert, speaks, eats, and is apparently mentally healthy.
Another example would be a robot that demonstrates external behaviors and measured internal activity associated with human-like intelligence but that humans assume is not in fact a person.
Without an intensional definition of consciousness, dialog about whether aversive experience happens can fail. However, if you accept that your own subjective experience is real, and grant that others can have similar experience, then you can still apply heuristics to decide whether other beings have aversive experience. Those heuristics can build on your own experience and common-sense.
Heuristics about Aversion
I believe that humans will mistreat robotic or software or animal entities. Humans could try to excuse the mistreatment with the belief that such entities do not have consciousness or aversive experience. This brings to mind the obvious question: what is aversive experience?
Here are several heuristics:
If behavior damages an entity, then the behavior causes aversive experience for the entity.
if an entity can feign or imitate[2] aversive experience, then it can experience aversive experience.
if reasonable people reasonably interpret some actions done to an entity as aversive to the entity, then those actions are aversive to the entity.
if the entity has something like system 1 processing[3] , then it can experience aversive experience.
I’m sure there are more common-sense heuristics, but those are what I could think of that might forestall inflicting aversive experience on entities whose subjective experience is a subject of debate.
[3] Processes that it does not choose to follow but that instead yield their results for further processing. It might be necessary to assume that at least one of these processes runs in parallel to another which the entity can edit or redesign.
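A sketch of those heuristics as a simple check, with hypothetical attribute names of my own; any one heuristic firing is treated as sufficient reason to treat the behavior as aversive and refrain from it.

```python
from dataclasses import dataclass

@dataclass
class AversionCheck:
    # Hypothetical observations about an entity and a behavior directed at it.
    behavior_damages_entity: bool
    entity_can_feign_aversive_experience: bool
    reasonable_people_call_behavior_aversive: bool
    entity_has_system_1_like_processing: bool

def treat_behavior_as_aversive(check: AversionCheck) -> bool:
    """Return True if any common-sense heuristic suggests the behavior
    inflicts aversive experience, so the behavior should be forestalled."""
    return any([
        check.behavior_damages_entity,
        check.entity_can_feign_aversive_experience,
        check.reasonable_people_call_behavior_aversive,
        check.entity_has_system_1_like_processing,
    ])
```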
Most Ethical Dilemmas are Actually a Conflict of Selfish Interests with Altruistic Interests
Ethicists seem to value systems of ethics (I will also call those moral systems) that are rational or applicable in all contexts in which they would indicate a choice of action. For example, utilitarianism gets a lot of flak in some unlikely thought experiments where it seems unintuitive or self-contradictory.
I believe that the common alternative to rational (enough) moral or ethical action is amoral or selfish action. If you apply a moral system to making some decisions, surely you apply an amoral system to the rest, and that amoral system lets you serve selfish interests.
If so, then it’s important to make that explicit. For example, “Yeah, so I was trying to decide this with my usual ethical heuristics, like, let’s maximize everybody’s utility in this situation, but then it seemed hard to do that, and so I went with what I’d like out of this situation.”
And that’s my main suggestion, just don’t deceive yourself that your ethical system has you in knots. It’s probably your selfish interests in conflict with your altruistic interests.
For more of that, read on.
A good question would be whether it was genuinely difficult to decide what to ethically do, or whether it wasn’t that difficult, but the decision conflicted with your selfish interests. Did the ethical choice make you feel bad? Was it unfulfilling? Did it seem like it might harm you somehow? If so, then your selfish interests got in the way.
There are even models of how to negotiate among competing moral paradigms now. As if you are really in a position to arbitrate which moral system you will use!? To me it’s absurd to believe that people apply their choices among ethical systems with any honesty, when they already have so many problems behaving ethically according to any system.
My heuristic based on extensive personal experience as well as repeated observation of people is that when serious ethical dilemmas come up, it is because a person is facing a choice between selfish and moral interests, not because they face a conflict between two ethical systems to which they give partial credence.
People can figure out what the right thing to do is pretty easily, but they have a hard time figuring out when they’re being self-serving or selfish.
To me, you can drop your moral reasoning at any time and choose your selfish reasoning, but not because that’s OK, moral, or reasonable. Instead, it’s because the application of ethical systems is more of an empirical question than anything else.
There are only a few linguistic and emotional confusions that people have around morality and selfishness. Take the word “care.” It’s confusing. If I care about someone, is it because:
their interests matter to me
my interests depend on their experience or behavior
If you take away my causal experience of the person:
their lovableness
the infectiousness of their happiness
their positive treatment of me
their personal significance in my life and memory
then what’s left to care about? Just them, and that’s not compelling me anymore with all the effects of that person’s experience and behavior gone. Then what should I do? Well, obviously, to keep being ethical toward them is the ethical thing to do.
And it seems you have to take away all those positive dependencies, all those effects someone has on you, just to hear someone say something like:
“Oh well then, you better balance what you do for them with some things you do for yourself, or you’re going to get hurt.”
Yeah, but that’s entirely too late to note that most decisions are about selfish and ethical interests, and that those two sets of interests are orthogonal. You can satisfy either or both or neither with every decision you make. If your goal in analyzing ethical systems is to be a better decision-maker overall, that probably starts with recognizing what interests are actually in play in your decisions all the time.
As an observer of my actions, keeping track of the consequences of my actions is a better proxy for my use of a moral system than keeping track of my stated intentions. However, as a performer of actions, I have to keep track of action intentions and action consequences, and try to align them as I get feedback on the outcomes of my actions.
There’s no moral argument for satisfying your own interests aside from how you might satisfy others’ interests somehow. Similarly, there’s no selfish argument for satisfying others’ interests aside from how you might satisfy your own interests somehow. Most of the hard work in discussion of ethical interests is in how to separate selfish interests from altruistic interests.
Ideology in EA
I think the “ideology” idea is about the normative specification of what EA considers itself to be, but there seem to be 3 waves of EA involved here:
the good-works wave, about cost-effectively doing the most good through charitable works
the existential-risk wave, building more slowly, about preventing existential risk
the longtermism wave, some strange evolution of the existential risk wave, building up now
I haven’t followed the community that closely, but that seems to be the rough timeline. Correct me if I’m wrong.
From my point of view, the narrative of ideology is about ideological influences defining the obvious biases made public in EA: free-market economics, apolitical charity, the perspective of the wealthy. EAs are visibly ideologues to the extent that they repeat or insinuate the narratives commonly heard from ideologues on the right side of the US political spectrum. They tend to:
discount climate change
distrust regulation and the political left
extoll or expect the free market’s products to save us (TUA, AGI, …)
be blind to social justice concerns
see the influence of money as virtuous and trust money, in betting and in life
admire those with good betting skills and compare most decisions to bets
see corruption in government or bureaucracy but not in for-profit business organizations
emphasize individual action and the virtues of enabling individual access to resources
I see those communications made public, and I suspect they come from the influences defining the 2nd and 3rd waves of the EA movement, rather than the first, except maybe the influence of probabilism and its Dutch bookie thought experiment? But an influx of folks working in the software industry, where just about everyone sees themselves as an individual but is treated like a replaceable widget in a factory, know to walk a line, because they’re still well-paid. There’s not a strong push toward unions, worker safety, or ludditism. Social justice, distrust of wealth, corruption of business, failures of the free market (for example, regulation-requiring errors or climate change), these are taboo topics among the people I’m thinking of, because it can hurt their careers. But they will get stressed over the next 10-20 years as AI take over. As will the rest of the research community in Effective Altruism.
Despite the supposed rigor exercised by EA’s in their research, the web of trust they spin across their research network is so strong that they discount most outside sources of information and even have a seniority-skewed voting system (karma) on their public research hub that they rely on to inform them of what is good information. I can see it with climate change discussions. They have skepticism toward information from outside the community. Their skepticism should face inward, given their commitments to rationalism.
And the problem of rationalized selfishness is obvious, big picture obvious, I mean obvious in every way in every lesson in every major narrative about every major ethical dilemma inside and outside religion, the knowledge boils down to selfishness (including vices) versus altruism. Learnings about rationalism should promote a strong attempt to work against self-serving rationalization (as in the Scout Mindset but with explicit dislike of evil), and see that rationalization stemming from selfishness, and provide an ethical bent that works through the tension between self-serving rationalization and genuine efforts toward altruism so that, if nothing else, integrity is preserved and evil is avoided. But that never happened among EA’s.
However, they did manage to get upset about the existential guilt involved in self-care, for example, when they could be giving their fun dinner-out money to charity. That showed lack of introspection and an easy surrender to conveniently uncomfortable feelings. And they committed themselves to cost-effective charitable works. And developing excellent models of uncertainty as understood through situations amenable to metaphors involving casinos, betting, cashing out, and bookies. Now, I can’t see anyone missing that many signals of selfish but naive interest in altruism going wrong. Apparently, those signals have been missed. Not only that, but a lot of people who aren’t interested in the conceptual underpinnings of EA “the movement” have been attracted to the EA brand. So that’s ok, so long as all the talk about rationalism and integrity and Scout Mindset is just talk. If so, the usual business can continue. If not, if the talk is not just smoke and mirrors, the problems surface quick because EA confronts people with its lack of rationality, integrity, and Scout Mindset.
I took it as a predictive indicator that EA’s discount critical thinking in favor of their own brand of rationalism, one that to me lacks common sense (for example, conscious “updating” is bizarrely inefficient as a cognitive effort). Their lack of interest in climate destruction was a further warning, as was the strange decision to anchor ethical decisions on an implausible future and the moral status of trillions of possibly existent future people. The EA community’s shock and surprise at the collapse of SBF and FTX has been another indication of a lack of real-world insight and of connection to working streams of information in the real world.
It’s obvious where the tensions lie: between the same things as usual, selfishness (and vices) and altruism. By the way, I suspect that no changes will be made in how funders are chosen. I also suspect that the denial of climate change is more than ideology; as time goes on, it will reveal itself as real fear and a backing away from fundamental ethical values. I understand that. If a situation seems hopeless, people give up their values. The situation is not hopeless, but it does challenge selfish concerns, valid ones. Maybe EA’s have no stomach for true existential threats. The implication is that their work in that area is a sham or serves contrary purposes.
It’s a problem because real efforts are diluted by the ideologies involved in the EA community. Community is important because people need to socialize. A research community emphasizes research, and norms for research communities are straightforward. A values-centered community is … suspect: prone to corruption, to misunderstandings about what community entails, and to reprisals and criticism when normative values are not served by the community day-to-day. Usually, communities attract the like-minded. You would expect or even want homogeneity in that regard, not complain about it.
If EA is just about professionalism in providing cost-effective charitable work, that’s great! Then there’s no community involved; the values are memes and marketing, and the metrics are those of charity, not the well-being of community members or their diversity.
If it’s about research products, that’s great! The community’s development of research methods and critical-thinking skills needs improvement, though.
Otherwise, comfort, ease, relationships, and good times are the community requirements. Some people can find that in a diverse community that is values-minded. Others can’t.
A community that’s about values is going to generate a lot of churn about things you can’t easily change. You can’t change the financial influences, the ideological influences, (most of) the public claims, and certainly not the self-serving rationalizations, all other things equal. If EA had ever gone down the path of exploring the trade-offs between selfishness and altruism with more care, they might have had hope of becoming a values-centered community. I don’t see them pulling that off at this point, if only for their lack of interest or understanding. It’s not their fault, but it is their problem.
I favor dissolution of all community-building efforts and a return to research and charity-oriented efforts by the EA community. It’s the only thing I can see that the community can do for the world at large. I don’t offer that as some sort of vote, but instead as a statement of opinion.
Resources on Climate Change
IPCC Resources
The 6th Assessment Reports
The Summary for Policymakers (Scientific Basis Report, Impacts Report, Mitigation Report) NOTE: The Summaries for Policymakers are approved line-by-line by representatives from participating countries. This censors relevant information from climate scientists.
The Synthesis Report: this is pending in 2023
Climate Change: The Scientific Basis
Climate Change: Impacts
Climate Change: Mitigation
Key Climate Reports: The 6th (latest) Assessment Reports and additional reports covering many aspects of climate, nature, and finance related to climate change prevention, mitigation, and adaptation.
Emissions Gap Report: the gap refers to the difference between pledges and actual reductions, as well as between pledges and necessary targets.
Provisional State Of The Climate 2022: the full 2022 report, with 2022 data (reflecting the Chinese and European droughts and heat waves), is still pending.
United in Science 2022: A WMO and UN update on climate change, impact, and responses (adaptation and mitigation).
and many more; see the IPCC website for the full list.
Archive of Publications and Data: all Assessment Reports prior to the latest round. In addition, it contains older special reports, software and data files useful for purposes relevant to climate change and policy.
TIP: The IPCC links lead to pages that link to many reports. Assessment reports from the three working groups contain predictions with uncertainty levels (high, medium, low), plenty of background information, supplementary material, and high-level summaries. EA’s might want to start with the Technical Summaries from the latest assessment report and drill down into full reports as needed.
Useful Websites and Reports
IPBES Global Assessment Report On Biodiversity
The Intergovernmental Science-Policy Platform on Biodiversity and Ecosystem Services (IPBES)
The World Wildlife Fund Living Planet Report
Birdlife.org State Of The World’s Birds Report
Audubon Society Survival By Degrees Report
State of the Birds Report
NOAA Climate Models Website
Multiple Breadbasket Failures Pardee Report
UBC’s Sink Or Swim Report on South China Sea Fisheries
NASA Sea Level Modeling Website
Oceana Seafood Fraud and Mislabeling across Canada Report
Noteworthy Papers
Climate change is increasing the risk of a California megaflood, 2022
Evidence for massive methane hydrate destabilization during the penultimate interglacial warming, 2022
Democratizing risk, 2022
World scientists warning of a climate emergency, 2022
Climate endgame: exploring catastrophic climate change scenarios, 2022
Economists’ erroneous estimates of damages from climate change, 2021
Collision course development pushes Amazonia toward its tipping point, 2021
Permafrost carbon feedbacks threaten global climate goals, 2021
New climate models reveal faster and larger increases in Arctic precipitation than previously projected, 2021
The Quiet Crossing of Ocean Tipping Points, 2021
Future of the human climate niche, 2020
RCP 8.5 tracks human carbon emissions, 2020
The appallingly bad neoclassical economics of climate change, 2020
Thermal bottlenecks in the lifecycle define climate vulnerability of fish, 2020
Large changes in Great Britain’s vegetation and agricultural land-use predicted under unmitigated climate change, 2019
Comment: Climate Tipping Points—Too Risky to Bet Against, 2019
Trajectories of the Earth system in the Anthropocene, 2018
The Interaction of climate change and methane hydrates, 2017
The Anthropocene Biosphere, 2015
High risk of extinction of benthic foraminifera in this century due to ocean acidification, 2013
Global Human Appropriation Of Net Primary Production Doubled In the 20th Century, 2012
Tipping Elements in the Earth’s climate system, 2008
Stabilization wedges: solving the climate problem for the next 50 years with current technologies, 2004
News and Opinions and Controversial Papers
Uncontrolled Chemical Releases: A Silent, Growing Threat
A Short Guide To The 6th Mass Extinction
Noah’s Debate Policy
Start a debate with another party if all of the following are true:
I have resources to participate in a debate.
the debate question interests me.
I want to debate.
the other party wants to debate.
we both seem interested in truth-building (either as a scout or a soldier, using Galef’s model).
Instead of starting a debate, offer information to another party if all of the following are true:
I have information to offer.
I want to offer information.
The other party is willing to receive information or the debate involves an open forum that includes third parties.
either I or the other party does not want to debate.
the other party could benefit from the information or the information clarifies my position to relevant third parties.
End a debate with another party if any of the following are true:
the debate question is now settled to the satisfaction of both parties.
the debate topic no longer interests me.
either I or the other party does not want to debate anymore.
one party is not interested in truth-building anymore.
Answers to potential questions about this policy
What does “truth-building between two parties” mean with regard to a debate question?
By “truth-building between two parties”, I mean:
both parties seek the truth.
both parties share truthful information with each other that helps answer the debate question.
neither party withholds truthful information from the other that would help answer the debate question.
neither party deceives the other party with false information that is ostensibly intended to help answer the debate question.
Why does a debate involve a debate question?
Any topic of debate that my policy addresses is more than a research topic. It is a topic about which two parties have conflicting beliefs. For my purposes, a conflict between two beliefs can be understood as a question, either a yes/no question or a multiple-choice question. Therefore, any topic of debate can be considered a question to answer between the two parties involved in a debate.
Why have a debate policy at all?
Another user of the EA forum (Elliot Temple) follows a debate policy. He shared the idea and his reasons with the EA forum’s members. I agree with some of his reasons, and offer these additional reasons. A written debate policy:
is a record of rules of debate conduct.
lets me explore and formalize distinctions that are applicable to a debate process.
is part of a specification for a teaching bot’s chat style.
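Since that last reason mentions a teaching bot, here is a minimal sketch of how the start/offer/end conditions above could be written as code. Everything in it (the names, the Party fields, the way I bundle the conditions) is my own illustrative assumption, not a finished specification.

```python
# A minimal sketch of the debate policy's conditions as plain boolean checks.
# All names here are illustrative assumptions, not a finished spec.

from dataclasses import dataclass

@dataclass
class Party:
    has_resources: bool = True          # resources to participate in a debate
    question_interests_me: bool = True  # the debate question interests this party
    wants_to_debate: bool = True
    seeks_truth: bool = True            # scout or soldier, in Galef's sense

def should_start_debate(me: Party, other: Party) -> bool:
    # Start a debate only if every start condition holds.
    return (me.has_resources and me.question_interests_me
            and me.wants_to_debate and other.wants_to_debate
            and me.seeks_truth and other.seeks_truth)

def should_offer_information(me: Party, other: Party,
                             have_info: bool, want_to_offer: bool,
                             other_receptive_or_open_forum: bool,
                             info_is_useful: bool) -> bool:
    # Offer information instead of debating when all offer conditions hold.
    someone_declines_debate = not me.wants_to_debate or not other.wants_to_debate
    return (have_info and want_to_offer
            and other_receptive_or_open_forum
            and someone_declines_debate
            and info_is_useful)

def should_end_debate(question_settled: bool, topic_no_longer_interests_me: bool,
                      someone_wants_out: bool, truth_building_stopped: bool) -> bool:
    # End the debate when any end condition holds.
    return (question_settled or topic_no_longer_interests_me
            or someone_wants_out or truth_building_stopped)
```

The point of the sketch is only that the policy is a handful of conjunctions and disjunctions; anything a bot actually did with it would need much more care.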
I read about Stable Diffusion today.
Stable Diffusion is an uncensored AI art model.
It’s important to recognize the troubles posed by deepfakes. Stable Diffusion makes those troubles real. Its use, enhanced and unfettered, poses a genuine threat to human culture and society, because it will make fabricated images and video indistinguishable from those produced by authentic means. For example, historical imagery can be reliably faked, along with crime footage, and so on[1]. But that is not why I wrote this shortform.
Stable Diffusion was put out, open-source, with no debate, and no obstacles other than technical and funding ones. You AI safety folks know what this means: big players like Google, Microsoft, and OpenAI are producing AI art models with restrictions of various sorts, and a start-up comes along and releases a similar product with no restriction. Well, the license says you cannot use it for certain things. Also, it has a safety feature that you can turn off. I believe that people are turning it off.
People discussing DALL-E 2 and its competitors, and their restrictions and limitations, are not really having the same conversation any more. Now the conversation is about what to do when that technology is let loose on the web: unrestricted, open-source, free to be developed further, and put to use however anyone likes. Hmm.
I hope there is some element of the AI safety community looking at how to handle the release of AGI software (without safeguards) into the global software community. Clearly, there is only so much you can do to put safeguards on AGI development. The real question will be what to do when AGI development occurs with no safeguards and the technology is publicly available. I see the parallel easily. The same ethical concerns. The same genuine restraint on the part of large corporations. And the same “oh well” when some other company doesn’t show the same restraint.
Life extension and Longevity Control
When society includes widespread use of life extension technology, a few unhealthy trends could develop.
the idea of being “forced to live” will take on new and different meanings for people in a variety of circumstances, testing institutional standards and norms that align with commonly employed ethical heuristics. Testing the applicability of those heuristics will force numerous changes to what counts as informed and capable decision-making in ethical domains.
life-extension technology will become associated with longevity control, including the time and condition in which one passes away. At the moment, that is not a choice. In the future, I expect society will legalize choice of life length (maybe through genetic manipulation of the time of death), or some proxy for a genetically programmed death (for example, longevity-termination technologies). I suspect those technologies will be abused in a variety of contexts (for example, on unwilling users).
longevity technology will substitute for health treatment, that is, behaviors that encourage healthy longevity and preventive medical care will be replaced by health-reducing behaviors whose consequences are treated with frequent anti-aging treatments.
frustration with the body’s inadequate resilience against typical health-reducing behaviors will encourage further technology aimed at allowing those behaviors without physical consequences. The consequence that matters to me is the resulting neglect of developing and exploring our ability to choose alternatives to health-reducing behaviors.
NOTE: Human experience is typically defined by our experience of ourselves at various biological stages of life. While we can shorten or extend those stages, and people typically want the biological health, maturity, and looks of a 20-something for as long as possible, we actually experience ourselves and our relationships with others in terms of our true ages.
Newcomb’s problem, honesty, evidence, and hidden agendas
Thought experiments are usually intended to stimulate thinking, rather than be true to life. Newcomb’s problem seems important to me in that it leads to a certain response to a certain kind of manipulation, if it is taken too literally. But let’s assume we’re all too mature for that.
In Newcomb’s problem, a person is given a context and a suggestion: their behavior has been predicted beforehand, and the person with that predictive knowledge is telling them about it. There are hypothetical situations in which that knowledge would be correct, but Newcomb’s problem doesn’t appear to be one of them.
But to address the particulars, I will focus on testing the scientist’s honesty and accuracy. Let’s recap quickly (a small sketch of the claimed payoffs follows the list):
the scientist claims to have made a prediction, and that the prediction determines which of two possible behaviors you will choose: you take both boxes from the scientist, or you take the opaque one only.
the scientist claims to decide whether to put $1,000,000 in an opaque box before interacting with a person (you) who enters the scientist’s tent, based on a brain-scan machine posted at the tent entrance. The brain-scan machine gives the scientist a signal about what you’re likely to do, and the scientist either puts a million in the opaque box or doesn’t. In addition, there’s a clear box in the tent containing $1,000.
you can’t see what’s in the opaque box the whole time you’re in the tent. You can see the $1000 the entire time.
if the scientist believes what they claim, then the scientist thinks that interacting with you will have no effect on what you do once you walk into the tent. It was decided when you walked through the door. In other words, in the scientist’s mind, no matter what you or the scientist would otherwise do, only one of two outcomes will occur: you will take both boxes, or just the opaque box.
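Here is the small sketch of the payoffs as the scientist describes them. It only tabulates the two options the scientist offers, which is exactly the restriction I’m about to push back on; the numbers come straight from the setup above, and the code takes no side on whether the scanner works.

```python
# The claimed payoff table for Newcomb's problem, as the scientist tells it.

CLEAR_BOX = 1_000
OPAQUE_PRIZE = 1_000_000

def claimed_payoff(predicted_one_box: bool, chose_one_box: bool) -> int:
    """Per the scientist's claims, the opaque box holds $1,000,000 only if
    the scanner predicted you would take the opaque box alone."""
    opaque = OPAQUE_PRIZE if predicted_one_box else 0
    return opaque if chose_one_box else opaque + CLEAR_BOX

# The four cells of the claimed payoff table:
for predicted in (True, False):
    for chose in (True, False):
        print(f"predicted one-box={predicted}, chose one-box={chose}: "
              f"${claimed_payoff(predicted, chose):,}")
```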
So here’s what I think. There are far more situations in life where someone tells you a limited set of your options from a larger set than there are situations in which someone tells you your full set of options. The scientist claimed only two outcomes would occur (put differently, that you would do one of two things). The scientist supposedly has this brain-scan technology that tells them what your two options are, and the scientist is confident that the technology works. Your willingness to believe the scientist at all depends on believing the scientist’s claims in their entirety, including the claims about the reliability of the machine. Once some claims prove false, you have reason to question the rest. At that point, the thought experiment’s setup fails. Let’s test the scientist’s claims.
So, don’t take either box. Instead, walk out of the tent. If you make it out without taking any boxes, then you know that the scientist was wrong or lying about what you would do. You did not take any boxes. You just left both boxes on the table. Now, think this over. If the scientist was sincere, then there’s a mad scientist with as much as $1,001,000 in the tent you just walked out of, who either thought you would follow their instructions or thought that they had predicted you so well that they could just tell you what you would do. If the scientist was not sincere, then there’s a lying and manipulative scientist in the tent with $1,000 and an opaque mystery box that they’re hoping you’ll take from them.
BTW: If someone gives me free money, even $1,000, to take a mystery package from them, I decline.
But, you say, “I think it’s understood that you could walk out of the tent, or start a conversation, maybe even ask the scientist about the opaque box’s contents, or do other things instead.” However, if that’s so, why couldn’t you just take the $1000, say thanks, and leave rather than take the opaque box with you? What constrained your freedom of choice?
Was it the mad scientist? Did the mad scientist zipper the tent entrance behind you and booby-trap the boxes so you either take both boxes or just the opaque one? Is the scientist going to threaten you if you don’t take either box? If so, then you’ve got a mad scientist who’s not only interested in predicting what you do, but also interested in controlling what you do, by constraining it as much as they can. And that’s not the thought experiment at all. No, the thought experiment is about the scientist predicting you, not controlling you, right? And you’re an ethical person, because otherwise you would shake the scientist down for the million still in the tent, so we’ll ignore that option.
However, in case the thought experiment is about the scientist controlling you, well, I would leave the tent immediately and be grateful that the scientist didn’t choose to keep me there longer. That is, leave if you can. Basically, it seems that if you do anything too creative in response to the scientist, you could be in for a fight. I would go with trying to leave.
But let’s assume you don’t believe that the scientist is controlling you in any way; controlling you seems like a different thought experiment. Let’s just go with you walking out of the tent without any boxes. Catch your breath, think over what happened, and don’t go back in the tent and try to interact with the scientist anymore. Remember, anyone willing to do that sort of thing to strangers like you is plausibly a desperate criminal wanting you to take a mysterious package from them. Or a distraught (and plausibly delusional) scientist who you just proved has a worthless brain-scan machine that they wasted millions of dollars testing.
EDIT: OK, so in case it’s not obvious, by walking out you disproved that the scientist’s brain scanner works. It predicted one of two behavioral outcomes, and you chose a third from several, including:
trying to take the $1000 out of the clear box and leaving the opaque box behind
shaking down the scientist for the million presumably in the tent somewhere, if it’s not all in the two boxes
starting a conversation with the scientist, maybe to make a case that you really need a million dollars no matter what kind of decision-maker you are
leaving the tent asap
and plausibly others
By disproving that the brain scanner works reliably, you showed a key claim of the scientist’s to be false: “my brain scanner will predict whether you take both boxes or only one”. Other claims from the scientist, like “I always put a million in the opaque box if my brain scanner tells me to” and “So far, my brain scanner has always been right”, are now suspect. That means that the scientist’s behavior and the entire thought experiment can be seen differently, perhaps as a scam, or as evidence of a mad scientist’s delusional belief in a worthless machine.
You could reply:
“What if the brain scanning machine only works for those situations where you take both boxes or only the opaque box and then just leave?”: Well, that would mean that loads of people could come in the tent, do all kinds of things, like ransack it, or take the clear box, or just leave the tent while taking nothing, and the machine gives the scientist a bogus signal for all of those cases. The machine has, then, been wrong, and frequently.
“What if the brain scanner gives no signal if you won’t do one of the two things that the scientist expects?”: Interesting, but then why is the scientist telling you their whole spiel (“here are two boxes, I scanned your brain when you came through the door, blah blah blah...”) after finding out that you won’t just take one of the two options that the scientist offers? After all, as a rational actor you can still do all the things you want to do after listening to the scientist’s spiel.
“Maybe the scientist changes their spiel, adds a caveat that you must follow their instructions in order for the predictions to work.” OK, then. Let’s come back to that.
“What if there are guards in the tent, and you’re warned that you must take either the opaque box or both boxes or the guards will fatally harm you?”: Well, once again, it’s clear that the scientist is interested in controlling and limiting your behavior after you enter the tent, which means that the brain scanner machine is far from reliable at predicting your behavior in general.
“Hah! But you will choose the opaque box or both boxes, under duress. This proves that some people are one-boxers and others are two-boxers. I got you!”: Well, some people would follow the scientist’s instructions, you’re right. Other people would have a panic attack, or ask the scientist which choice the scientist would prefer, or just run for their lives from the tent, or even offer the guards a chance to split the scientist’s money if the guards change sides. Pretty soon, that brain-scanning machine is looking a lot less relevant to what the tent’s visitors do than the guards and the scientist are. From what I understand, attempting to give someone calm and reassuring instructions while also threatening their lives (“Look, just take the $1,000 and the opaque box, everything will be fine”) doesn’t tend to work very well.
“Wait a minute. What if the scientist has a brain-scanning device that predicts hundreds of different behaviors you could do by scanning you as you walk in the tent, and …”: Let me stop you there. If the scientist needs that kind of predictive power, and develops it, it’s to know what to do when you walk in the tent, not just to know what you will do when you walk in the tent. And just because the scientist knows what you will do if you’re confronted with a situation doesn’t mean that the scientist has a useful response to what you will do. At this point, whose decision-making is really under the microscope, the tent’s visitor’s, or the scientist’s?
“Let’s back this up. All we’re really thinking about is someone who willingly participates in the scientist’s game, trusts the scientist, and follows the scientist’s instructions. Aren’t you just distorting the experiment’s context?” If someone claims to be able to predict your behavior, and the only way for their predictions to ever seem accurate is for you to play along with the options they provide, then don’t you see that dishonesty is already present? You are the one being dishonest, or you both are. You’re playing along with the mad scientist, or the mad scientist isn’t mad at all, but has some ulterior motive for wanting you to take an opaque box with you, or otherwise participate in their bizarre game. The predictions aren’t really about what you would do if confronted with two boxes in such a situation. The predictions are make-believe that you play with someone with boxes in a tent, and only if you’re that kind of person. Not everyone is.
“No, you just said that the visitor to the tent is ‘playing along’. But the thought experiment is about someone who trusts the scientist, and playing along is not trusting the scientist.” Yes, exactly the kind of thing that I’ve been cautioning you about. Don’t be one of those people. There are people who trust you and select among the options you give them for whatever reason you offer, no matter how contrary to existing evidence (e.g., of their own free will) the option selection is. Their decision strategies do not include acting on good evidence or understanding causality very well. And such people would likely leave with just the opaque box, and, if the scientist is to be believed, will be rewarded for it with a million dollars. However, they fall for every magic trick, and do not gather evidence carefully.
“No, no, it’s not a magic trick. The thought experiment says that the scientist is really checking the brain-scanning machine and putting the money in the opaque box, or not, according to what the machine says, and then making the same claims to every visitor about how the whole experiment works, and asking the visitors to participate according to the scientist’s simple instructions. All along you’ve been distorting this every which way. The machine could fail, but we know it succeeds. It succeeds with everybody, and the point of the thought experiment is just to think through what you ought to do in that situation, to get the most money, if you agree to the scientist’s terms. The only way to prove the scientist is wrong as a single visitor is to do everything right, leave with the opaque box only, but then find nothing inside. But we know that never happens.” I see. Yeah. OK! I think you’ve changed the experiment a little, though. Before, it was just: walk in, and get predicted. Now, it’s: walk in and choose to cooperate, and the scientist is telling the truth, and the brain-scanning machine appears to work, and then get predicted. And you can’t just play along; a visitor has to believe the scientist, and for good reason, in order for people to draw any conclusions about what the experiment means.
“What? No, you don’t have to believe the scientist. You can play along, get some money, just choose one or two boxes. That’s what everyone should do, and the experiment shows it.” Some people would do that. We might as well flip a coin, or just pretend that we have reason to believe the scientist’s claim for causal reasons, and make up a causal reason. How about something like, “Hey, that million in the opaque box is like Schrödinger’s cat.” Maybe we make up a causal reason in hindsight after we find that million in the opaque box and leave the clear box behind. However, “rational” people would only follow the instructions if they believed the evidence warranted it, and then those “rational” people would explore the reasons why. As far as I know, this thought experiment is supposed to show that evidential and causal decision theory can conflict, but in fact, I think it only shows that causal decisions can be revised based on new evidence. For example, brain-scanner prediction, mind control, subtle influence by the scientist, money teleportation, time travel by someone observing you and taking the money back in time, or an unlikely string of random predictive success by a totally useless brain scanner are all potential explanations of why the scientist’s machine would appear to work, if you decided to test whether it works by taking the opaque box.
“So what? Then the thought experiment only applies to people who follow instructions and trust the scientist and have good reason to trust the scientist’s claims, if you accept the idea that it’s supposed to distinguish evidential and causal decision theory. All your discussion of it managed to do was convince me that the thought experiment is well-designed, but also plausible. I think brain scanners like that, ones that work in a context where you choose to follow instructions, are plausible. If they were built, then setting something like this up in real life would be easy.” Yeah, and expensive. Plenty of people would take the opaque box only. I think this makes me want to revise the definition of “plausible” a little bit, for myself. I would just leave the tent. Julia Galef also thinks that such devices as brain scanners are plausible, or she claimed that, in her old video. So you’re in good company.
And thanks!
There’s this thing, “the repugnant conclusion”. It’s about how, if you use aggregate measures of utility for people in a population, accept that more people each getting the same utility means more total utility, and think it’s good to maximize total utility, then you ought to favor giant populations of people living lives barely worth living.
Yes, it’s a paradox. I don’t care about it because, as far as I can see, there’s no reason to want to maximize total utility by increasing a population’s size. However, by thinking so, I’m led down a different path. I’m not a utilitarian, but I check in with the utilitarian perspective to understand some things better.
The form of utilitarianism that I introduce below is my best utilitarian perspective. I created it as part of rejecting the repugnant conclusion. I’ll let you ask the interested questions, if you have any, lol. Here it is.
Imagine an accounting system that, for each person, measures the utility, positive and negative, of that person’s actions for other people. Your own personal utilitarian ledger, but let’s assume someone else keeps it for you. That other person knows every action you take and what positive or negative utility your actions create.
If the term “utility” confuses you, think of other terms, like:
benefit or harm
happiness or suffering
gain or loss
pleasure or pain
improvement or decline
For example, positive utility that you create for someone could be an improvement in their health.
Your ledger holds information about what you cause for people everywhere, millions, billions, even trillions of people, now and in the future. Well, OK, that’s only if you consider individuals from various other species as deserving a page in your ledger.
How would I make this ledger work? Here’s what I would do:
Put aside the mathematical convenience of aggregate measures in favor of an individual accounting of utility. If you can track the utility you cause for even two other people, your ledger keeper should be able to do it for two hundred billion, right? Sure.
Set up a few rules to handle when people cease to exist. Those rules should include:
Once a person’s existence ends, you can no longer create utility for that person. Accordingly, there should be no new entries in your ledger about that person. Prior utility accounting associated with a person from when they were alive can be kept but not altered, except to better reflect utility that you created for the person when the person was still living.
Ledger entries associated with people who were expected to be conceived but are no longer expected to be conceived must be deleted entirely from the ledger, because those entries apply to a never-existent person. They are bogus.
Entries about a death that you (perhaps inadvertently) cause should be full and complete, applying to everyone affected by the death other than the dead person, including everyone still living and everyone who will be conceived who gets positive or negative utility from the person’s death.
The suffering or happiness involved in the person’s going through the process of dying should also be considered negative or positive utility and accounted for accordingly. A painful, slow death is a large negative harm to inflict on someone, whereas a quick, painless death in the presence of loving family is an improvement over a painful slow death, all other things equal.
Do not record death itself as a change in utility. The fact of death itself should not be recorded as a negative (or positive) utility applying to the now nonexistent person. There are still all the harms of death noted previously. Aside from those however, the only change recorded on the ledger to the dead person’s utility is that there are no longer events generating new utility for the person because the person no longer exists.[1]
Do not record intended consequences as creating utility just because they were intended. That is a different form of morality tracking, to do with keeping a record of a person’s character. On the utilitarian ledger, only actual utility gets recorded in an entry.
Other than those changes, I think you can go ahead and practice utilitarianism as you otherwise would, that is, doing the greatest good for the greatest number, and considering all people as equally deserving of consideration.
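To make the bookkeeping concrete, here is a minimal sketch of the ledger and the death-handling rules above. The names and the structure are my own illustrative assumptions; a real accounting of utility would obviously be much harder than this.

```python
# A personal utilitarian ledger: one page of entries per affected person,
# with the death-handling rules above. Illustrative only.

from dataclasses import dataclass, field

@dataclass
class Entry:
    description: str
    utility: float                               # actual utility created, + or -

@dataclass
class PersonalLedger:
    pages: dict = field(default_factory=dict)    # person id -> list of entries
    deceased: set = field(default_factory=set)
    never_conceived: set = field(default_factory=set)

    def record(self, person: str, entry: Entry) -> None:
        if person in self.never_conceived:
            return                               # bogus entry: never-existent person
        if person in self.deceased:
            return                               # no new utility for the dead
        self.pages.setdefault(person, []).append(entry)

    def mark_dead(self, person: str) -> None:
        # Death itself is not recorded as utility for the dead person; harms of
        # the dying process and harms to others are recorded as usual.
        self.deceased.add(person)

    def mark_never_conceived(self, person: str) -> None:
        # Delete every entry for a person who will now never exist.
        self.never_conceived.add(person)
        self.pages.pop(person, None)
```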
Utilitarianism developed in that way does not offer the typical problems of:
aggregate measures (average, total, variance) screwing up determination of utility maximization for many individuals
bogus accounting of utility intended for nonexistent or never-existent people.
bogus accounting of utility intended to be created for existent people but not actually created.
This personal utilitarian ledger only tells you about actual utility created in a single shared timeline for a population of individuals. Intentions and alternatives are irrelevant. A dislike of death or a liking for having children is similarly irrelevant, unless contradiction of those values is considered a negative utility created for existent people. Of course, the harms to existent others associated with death or the absence of conception are still recorded in the ledger. And the welfare of the population as a whole is never actually considered.
An extension to the accounting ledger, one that tracks the consequences of actions for your own utility, would record your actions, including such interesting ones as actions to make hypothetical people real or to extend the lives of existing people. The extension would record actual consequences for you even if those actions create no utility for other existing people. You might find this extension useful if, as someone with a ledger, you want to treat your own interests as deserving equal consideration compared to others’ interests.
For me, a utilitarian ledger of this sort, or a character ledger that tracks my intentions and faithfully records evidence of my character, would provide a reference point for making moral judgments about myself. Not a big deal, but when you look at something like the repugnant conclusion, you could ask yourself, “Who does this apply to and how?” I don’t require that I practice utilitarianism, but in a context where utilitarian considerations apply, for example, public policy, I would use this approach to it. Of course, I’m no policy-maker, so this ledger is little more than a thought experiment.
[1] The only exception would be error-correction events to revise old utility information from when the person was living. Error-correction events only occur when the ledger keeper corrects a mistake.
Heuristics that identify self-serving arguments
As close as I get to epistemic status: “Mm, yes, this type of argument that I’m making is usually self-serving”
Common heuristics to let me assume an argument is self-serving or lacks intellectual honesty include:
the argument justifies (rationalizes) a vice. For example, I watch too much junk TV on Netflix, and commonly tell myself that I do it to relax. However, the habit is a problem, and I have better ways to relax than watching movies.
the argument justifies a plan that offers social acceptance, career development, or another important gain. I can backtrack from any convenient conclusion and ask myself, “Hmm, am I leaving out some important bit of contrary information, for example, that bears on other people’s well-being or some principle I keep or rule that I follow?”
the argument’s conclusion makes me feel sad, lonely, angry, or ashamed about accepting it. Sometimes intellectual honesty is revealed by my feelings, but not always. Deciding what my feelings mean takes some introspection, and my ignorance makes this heuristic unreliable. Sometimes, though, a bothersome feeling is an important signal of a personal standard or preference that my own rationalized behavior is not meeting.
the argument’s entire structure changes when what the conclusion would mean for me changes. For example, my argument for why I always take the trash out every evening can be quickly followed by an argument for why it’s OK to let the trash fester in the bin, if it happens to be raining hard outside and I don’t feel like getting wet.
the argument moves my attention to something else, and doing so changes my feelings for the better, but at the cost of my ignoring what was bothering me. My feelings don’t always correspond to a specific focus of my attention. I can focus on something else, and feel good about that, but as a consequence, ignore something that feels less good. At that point, I need other means, for example, cognitive aids or friendly helpers that remind me of what is relevant to problem-solve and give me pointers on how to deal with it.
the argument justifies idiocy. I keep a short list of human tendencies that I consider idiotic (for example, sadism); most people don’t display them most of the time, but any argument that justifies or enables them is suspect to me. That said, this heuristic starts a more difficult intellectual inquiry than one would expect.
a person stating their own position holds a dissonant perspective for a later time. People typically put effort in one direction and then take a break from that to be somebody else. For example, they trade in their priest’s collar for a roughneck jacket and go act like a jerk. By day, they’re a mild-mannered gentle person, but by night, they’re a gun-toting crazy vigilante! Or whatever. The point is, they adopt principles like a part-time job or a uniform to wear. Statements about principles from those people, wonderful though those people may be, are typically self-serving or outright lies.
Muddling Solutions To New Problems
The list of minor examples of muddling solutions includes:
driving through traffic on the freeway, you reach a standstill. Bored, you turn on a podcast.
visiting a doctor, you learn that you have a benign but growing tumor. Upset, you schedule an inexpensive surgery to remove it.
coming home, you find that a tree branch broke through your attic window. Annoyed, you call a repairman to replace the window.
walking from your home to a nearby convenience store, you step in some smelly dog poop. Upset, you scrape some of it off with a twig and wash your shoe bottom after turning around and reaching home.
turning on the television, you see an emergency broadcast of strong winds in your area. Alarmed, you close the now-rattling windows.
sitting at your desk at work, you hear your UPS start beeping as the power goes out. Concerned, you quickly save your work on your computer.
preparing dinner for friends, you receive a text from a relative wanting dinner as well. Obligated, you agree and prepare more food.
These are small examples of muddling solutions. In the middle of your life, an unexpected problem arises. It could be worse, but you have some emotional response to it as you engage with the problem to solve it. The available systems support you in doing so. You take the actions you need and succeed, and can then go on with your life.
These are not immediate life and death decisions, but if the problem were to go on, they could inconvenience you or others in various ways. Either way, you muddle through. Perhaps the solution solves the problem completely, perhaps not. Other muddling solutions might be required, as events warrant. Whatever the case, there’s a typical pattern we know to follow. As problems arise, we solve them, and move forward.
Muddling through those situations toward a solution involves:
doing some routine process.
receiving disruptive news.
responding with a resourceful action (or not).
returning to our routine process.
repeating the above as much as necessary for as long as we can.
That’s what it means to muddle through.
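If it helps, the pattern above can be written as a small loop. The names are mine and purely illustrative; the point is only that muddling through is routine, disruption, response, and a return to routine, for as long as the supports hold.

```python
# Muddling through as a loop: routine, disruption, response, repeat.
# Purely illustrative; "respond" stands in for whatever resources are at hand.

def muddle_through(routine, disruptions, respond) -> bool:
    for disruption in disruptions:
        routine()                      # some routine process
        if not respond(disruption):    # a resourceful action (or not)
            return False               # the supports failed; muddling stops here
    return True                        # back to routine after every disruption
```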
What happens when we cannot muddle through
If you review the list of examples that I gave, you’ll see that each example relies on some resources being present, for example, a repairperson and money to pay her, or a local hospital that will do an inexpensive surgery, or the health insurance to pay for the surgery. When you consider a scenario, paying attention to what happens when muddling solutions cannot be carried out also gives you a sense of how civilization can fail.
What happens when the water doesn’t come from the tap? When the gas station has no gas? When insurance is not available at all? When there are no police or firemen or open hospitals or pharmacies?
Everyone understands the answers to those questions more easily than abstractions about risk management or high-impact events.
In general, the idea that others will mitigate a crisis for you makes sense; that is why we have government, public institutions, and businesses in place. They help us muddle through. However, depending on the root causes of a crisis and the size of the crisis, that mitigation might not be possible. In that case, civilization has collapsed.
The collapse of civilization is not that difficult to think through when put in terms of muddling through life and then finding that everyone no longer can, because our civil, economic, or technological support systems have stopped working.
Truth-building vs Manipulating
When exploring an area of knowledge with others, we can perform in roles such as:
Truth-building roles: mutual truth-seeking involving exchange of truthful information
scout (explores information and develops truthful information for themselves)
soldier (attacks and defends ideas to confirm their existing beliefs)
Manipulative roles: at least one side seeking to manipulate the other without regard for the other’s interests
salesperson (sells ideas and gathers information)
actor/actress (performs theatrics and optionally gathers information)
The Scout and Soldier model breaks down during communication when people believe that:
the truth is cheap and readily accessible, and so communication about important topics should serve other purposes than truth-building.
everyone else seems to be engaged in manipulating, either through lying or withholding information
withdrawal from engagement with others seems appropriate
joining in with others’ theatrics or sales efforts seems appropriate.
One of several lessons I draw from Galef’s excellent work is the contrast between those who are self-serving and those who are open to contradiction by better information. However, a salesperson can gather truthful information from you, like a scout, develop an excellent map of the territory with your help, and then lie to your face about the territory, leaving you with a worse map than before. Persons in the role of actors can accomplish many different goals with their theatrics, none of which help scouts develop truthful information.
Applying Heuristics about Aversive Experience Without Regard for Theories of Consciousness
TL;DR
Consciousness should have an extensional definition only. Misconstrual or reconception of the meaning of consciousness is an error. Robots, software agents, and animals can suffer aversive experience. Humans have heuristics to judge whether their own behavior inflicts aversive experience on other beings.
Those heuristics include:
that some behavior is damaging to the entity
that an entity can feign aversive experience
that reasonable people think some behavior is aversive
that the entity has something like system 1 processing.
Consciousness
Typically, an extensional definition of consciousness is a list of measured internal activity or specific external behavior associated with living people. Used correctly, “consciousness” has an extensional definition only. The specific items in the list to which “consciousness” refers depend on the speaker and the context.
In a medical context, a person:
shows signs of consciousness (for example, blinking, talking).
loses consciousness (for example, faints).
In an engineering context, a robot:
lacks all consciousness (despite blinking, talking).
never had consciousness (despite having passed some intelligence tests).
When misconstrued, the term “consciousness” is understood to refer to an entity separate from the entity whose behavior or measured internal activity[1] the term describes (for example, consciousness is thought of as something you can lose or regain or contain while you are alive).
When reconceived, a user of the term “consciousness” summarizes the items with an intensional definition.
An extensional definition of consciousness can mismatch the intensional definition.
For example, a nurse might believe that a person still has not actually regained their consciousness after a medical procedure that brings the person back to life 20 minutes after death even though the person now appears alert, speaks, eats, and is apparently mentally healthy.
Another example would be a robot that demonstrates external behaviors and measured internal activity associated with human-like intelligence but that humans assume is not in fact a person.
Without an intensional definition of consciousness, dialog about whether aversive experience happens can fail. However, if you accept that your own subjective experience is real, and grant that others can have similar experience, then you can still apply heuristics to decide whether other beings have aversive experience. Those heuristics can build on your own experience and common-sense.
Heuristics about Aversion
I believe that humans will mistreat robotic or software or animal entities. Humans could try to excuse the mistreatment with the belief that such entities do not have consciousness or aversive experience. This brings to mind the obvious question: what is aversive experience?
Here are several heuristics:
If behavior damages an entity, then the behavior causes aversive experience for the entity.
if an entity can feign or imitate[2] aversive experience, then it can experience aversive experience.
if reasonable people reasonably interpret some actions done to the entity as aversive to an entity, then those actions are aversive to the entity.
if the entity has something like system 1 processing[3] , then it can experience aversive experience.
I’m sure there are more common-sense heuristics, but those are what I could think of that might forestall inflicting aversive experience on entities whose subjective experience is a subject of debate.
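As a rough illustration, the four heuristics can be combined into a single check, where any one of them firing is enough reason to refrain from the behavior. The function and its argument names are my own assumptions, not a proposed standard.

```python
# Err on the side of not inflicting aversive experience: if any heuristic
# fires for a behavior and an entity, treat the behavior as aversive.

def treat_as_aversive(damages_entity: bool,
                      entity_can_feign_aversion: bool,
                      reasonable_people_call_it_aversive: bool,
                      entity_has_system1_like_processing: bool) -> bool:
    return (damages_entity
            or entity_can_feign_aversion
            or reasonable_people_call_it_aversive
            or entity_has_system1_like_processing)
```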
[1] For a human, measured internal activity is stuff like brainwaves or peristaltic action.
[2] I have noticed that sometimes humans believe or at least assert that other people imitate or feign emotions and internal experience.
[3] Processes that it does not choose to follow but that instead yield their results for further processing. It might be necessary to assume that at least one of these processes runs in parallel to another which the entity can edit or redesign.
Most Ethical Dilemmas are Actually a Conflict of Selfish Interests with Altruistic Interests
Ethicists seem to value systems of ethics (I will also call them moral systems) that are rational or applicable in all contexts in which they would indicate a choice of action. For example, utilitarianism gets a lot of flak in some unlikely thought experiments where it seems unintuitive or self-contradictory.
I believe that the common alternative to rational (enough) moral or ethical action is amoral or selfish action. If you apply a moral system to making some decisions, surely you apply an amoral system to the rest, and that amoral system lets you serve selfish interests.
If so, then it’s important to make that explicit. For example, “Yeah, so I was trying to decide this with my usual ethical heuristics, like, let’s maximize everybody’s utility in this situation, but then it seemed hard to do that, and so I went with what I’d like out of this situation.”
And that’s my main suggestion: don’t deceive yourself that your ethical system has you in knots. It’s probably your selfish interests in conflict with your altruistic interests.
For more of that, read on.
A good question would be whether it was genuinely difficult to decide what to ethically do, or whether it wasn’t that difficult, but the decision conflicted with your selfish interests. Did the ethical choice make you feel bad? Was it unfulfilling? Did it seem like it might harm you somehow? If so, then your selfish interests got in the way.
There are even models now of how to negotiate among competing moral paradigms. As if you are really in a position to arbitrate which moral system you will use!? To me it’s absurd to believe that people choose among ethical systems with any honesty, when they already have so many problems behaving ethically according to any system.
My heuristic based on extensive personal experience as well as repeated observation of people is that when serious ethical dilemmas come up, it is because a person is facing a choice between selfish and moral interests, not because they face a conflict between two ethical systems to which they give partial credence.
People can figure out what the right thing to do is pretty easily, but they have a hard time figuring out when they’re being self-serving or selfish.
To me, you can drop your moral reasoning at any time and choose your selfish reasoning, but not because that’s OK, moral, or reasonable. Instead, it’s because the application of ethical systems is more of an empirical question than anything else.
There are only a few linguistic and emotional confusions that people have around morality and selfishness. Take the word “care.” It’s confusing. If I care about someone, is it because:
their interests matter to me
my interests depend on their experience or behavior
If you take away my causal experience of the person:
their lovableness
the infectiousness of their happiness
their positive treatment of me
their personal significance in my life and memory
then what’s left to care about? Just them, and that’s not compelling anymore with all the effects of that person’s experience and behavior gone. Then what should I do? Well, obviously, continuing to be ethical toward them is the ethical thing to do.
And it seems you have to take away all those positive dependencies, all those effects someone has on you, just to hear someone say something like:
Yeah, but that’s entirely too late to note that most decisions are about selfish and ethical interests, and that those two sets of interests are orthogonal. You can satisfy either or both or neither with every decision you make. If your goal in analyzing ethical systems is to be a better decision-maker overall, that probably starts with recognizing what interests are actually in play in your decisions all the time.
As an observer of my actions, keeping track of the consequences of my actions is a better proxy for my use of a moral system than keeping track of my stated intentions. However, as a performer of actions, I have to keep track of both intentions and consequences, and try to align them as I get feedback on the outcomes of my actions.
There’s no moral argument for satisfying your own interests aside from how you might satisfy others’ interests somehow. Similarly, there’s no selfish argument for satisfying others’ interests aside from how you might satisfy your own interests somehow. Most of the hard work in discussing ethical interests is in separating selfish interests from altruistic interests.