You didn’t mention the Long Reflection, which is another point of contact between EA and religion. The Long Reflection is about figuring out which values are actually right, and I think it would be odd not to draw on a deep study of all the cultures available to us, including religious ones, to inform that. Presumably, EA is all about acting on the best values (when it does good, it does what is really good), so maybe it needs input from the Long Reflection to make big decisions.
I’ve wondered whether it’s easier to align AI to something simple rather than something complex (or whether it’s more like “aligning things at all is really hard, but adding complexity is relatively easy once you get there”). If simplicity is more practical, then training an AI to do something libertarian might be simpler than training it to pursue any other value. The AI could protect “agency” (one version of that being “the ability of each human to move their body as they wish, and to secure their own decision-making ability”). Or it might turn out to be easier to program AI to listen to humans, so that AIs end up under the rule of human political and economic structures, or of some other way of aggregating human decision-making. Under either a libertarian or a human-obeying AI programming, humans could pursue their religions mostly as they always have.
This is sort of a loose reply to your essay. (The things I say about “EA” are just my impressions of the movement as a whole.)
I think that EA has aesthetics; it’s just that the (probably not totally conscious) aesthetic value behind them is “lowkeyness” or “minimalism”. The Forum and logo seem simple and minimalistically warm, classy, and functional to me.
Your mention of Christianity focuses more on medieval-derived / Catholic elements. Those lean more “thick” and “nationalistic”. (“Nationalistic” in the sense of “building up a people group that has a deeper emotional identity and shared history”, maybe one which can motivate the strongest interpersonal and communitarian bonds.) But there are other versions of Christianity: more modern / Protestant / Puritan / desert. Sometimes people are put off by the poor aesthetics of Protestant Christianity, but at some times and in some contexts people have preferred Protestantism over Catholicism despite its relative aesthetic poverty. I think one set of things that Puritan (and to an extent Protestant) and desert Christianities have in common is self-discipline, work, and frugality. Self-discipline, work, and frugality seem to be a big part of being an EA, or at least of EA as it has been up to now. So maybe in that sense, EA (consciously or not) has exactly the aesthetic it should have.
I think an aesthetic lack helps a movement be less “thick” and “nationalistic”, and avoiding politics is an EA goal. (EA might like to affect politics while avoiding political identity at the same time.) If you have a “nice-looking flag”, you might “kill and die” for it. The more developed your identity, the more you feel you have to engage in “wars” (at least flame wars) over it. I think EA is conflict-averse and wants to avoid politics (maybe it sometimes wants to change politics without being politically committed, or to change politics in the least “stereotypically political”, least “politicized” way possible). EA favors normative uncertainty and being agnostic about what the good is. So EAs might not want more-developed aesthetics, if those aesthetics come with commitments.
I think the EA movement as it stands is doing (more or less) the right thing aesthetically. But the foundational ideas of EA (the things that change people’s lives so that they become altruistic in orientation and feel there is work for them to do, work they have to do “effectively”, or that cause them to try to expand their moral circles) are ones that perhaps ought to be exported to other cultures: perhaps to a secular culture that is the “thick” version of EA, or to existing, more “thick” cultures, like the various Christian, Muslim, Buddhist, Hindu, etc. cultures. A “thick EA” might innovate aesthetically and create a unique (secular, I assume) utopian vision in addition to the numerous other aesthetic/futuristic visions that exist. But “thick EA” would be a different thing from the existing “thin EA”.
I hadn’t heard of When the Wind Blows before. From the trailer, I would say Testament may be darker, although a lot of that has to do with me not responding to animation (or When the Wind Blows’ animation) as strongly as to live-action. (And then from the Wikipedia summary, it sounds pretty similar.)
I would recommend Testament as a reference for people making X-risk movies. It’s about people dying out from radiation after a nuclear war, from the perspective of a mom with kids. I would describe it as emotionally serious, and it presents a woman’s and an “ordinary person’s” perspective. I guess it could be remade if someone wanted to, or it could just be a good influence on other movies.
Existential risk might be worth talking about because of normative uncertainty. Not all EAs are necessarily hedonists, and perhaps the ones who are shouldn’t be, for reasons to be discovered later. So if we don’t know a priori what “value” is, or if EA as a movement doesn’t “know” what “value” is, we might want to keep our options open; and if everyone is dead, we can’t figure out what “value” really is or ought to be.
If EA has a lot of extra money, could that be spent on incentivizing AI safety research? Maybe offer a really big bounty for solving some subproblem that’s really worth solving. (For instance, finding a way to read and understand neural networks directly, instead of treating them as black boxes.)
Could EA (and fellow travelers) become the market for an AI safety industry?
I wonder if there are other situations where a person has a “main job” (being a scientist, for instance) and is then presented with a “morally urgent situation” (realizing your colleague is probably a fraud and you should do something about it). The traditional example is being on your way to your established job and seeing someone beaten up on the side of the road whom you could take care of. This “side problem” can be left to someone else (who might take responsibility, or not), and if taken on, it may well be an open-ended, energy-draining project with unpredictable outcomes for the person deciding whether to take it on. Are there other kinds of “morally urgent side problems that come up”, and are there better or worse ways to deal with the decision of whether to engage?
The plausibility of this depends on exactly what the culture of the elite is. (In general, I would be interested in knowing what all the different elite cultures in the world actually are.) I can imagine some tendency toward thinking of the poor / “low-merit” as superfluous, but I can also imagine superrich people not being that extremely elitist and thinking “Why not? The world is big; let the undeserving live”, or even things more humane than that.
But also, despite whatever humaneness there might be in the elite, I can see there being Molochian pressures to discard humans. Can Moloch be stopped? (This seems like it would be a very important thing to accomplish, if tractable.) If we could solve international competition (competition between elite cultures who are in charge of things), then nations could choose to not have the most advanced economies they possibly could, and thus could have a more “pro-slack” mentality.
Maybe AGI will solve international competition? I think a relatively simple, safe alignment for an AGI would be one where it is the servant of humans—but which humans? Each individual? Or the elites who currently represent them? If the elites, then it wouldn’t automatically stop Moloch. But otherwise it might.
(Or the AGI could respect the autonomy of humans and let them have whatever values they want, including international competition, which may plausibly be humanity’s “revealed preference”.)
This is kind of like my comment at the other post, but it’s what I could think of as feedback here.
--
I liked your point IV, that inefficiency might not go away. One reason it might not is that humans (even digital ones) would have something like free will, or caprice, or random preferences, in the same way that they do now. Human values may not behave according to our concept of “reasonable rational values” over time, as they evolve; in human history there have been impulses toward both the rational and the irrational. So future humans might for some reason prefer something like “authentic” beef from a real / biological cow (rather than digital-world simulated beef), or wish to make some kind of sacrifice of “atoms” for some weird far-future religion or quasi-religion that evolves.
--
I don’t know if my view is a mainstream one in longtermism, but I tend to think that civilization is inherently prone to fragility, and that it is uncertain we will ever have faster-than-light travel or communications. (I haven’t thought a lot about these things, so maybe someone can show me a better way to see this.) If we don’t have FTL, then the different planets we colonize will be far enough apart to develop divergent cultures, and generally unable to be helped by others in case of trouble. Maybe the trouble would be something like an asteroid strike. Or maybe it would be an endogenous cultural problem, like a power struggle among digital humans rippling out into the operation of the colony.
If this “trouble” caused a breakdown in civilization on some remote planet, it might impair the colony’s ability to do high-tech things (like produce cultured meat). If there is some risk of this happening, the colonists would probably try to have some kind of backup system. The backup system could be flesh-and-blood humans (more resilient in a physical environment than digital beings, even ones wedded to advanced robotics), along with a natural ecosystem and some kind of agriculture. They would have to keep the backup ecosystem and humans going throughout their history, and then if “trouble” came, the backup ecosystem and society might take over. Maybe for a while, hoping to return to a high-tech digital human society, or maybe permanently, if they feel like it.
At that point, whether factory farming gets redeveloped depends entirely on the culture of the backup society staying true to “no factory farming”. If they do redevelop factory farming, that would become part of the far future’s “burden of suffering” (or whatever term is better than that).
I guess one way to prevent this kind of thing from happening (maybe what longtermists already suggest) is simply to assume that some planets will break down, and try to re-colonize them if that happens, instead of expecting them to be able to deal with their own problems.
I guess if there isn’t such a thing as FTL, our ability to colonize space will be greatly limited, and so the sheer quantity of possible suffering will be a lot lower (as well as whatever good sentience gets out of existence). But if, say, we only colonize 100 planets over the remainder of our existence (under no-FTL) and 5% of them re-develop factory farming, that’s still five times as many planets with factory farming as there are today.
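To spell out the arithmetic (the 100-planet figure and the 5% rate are just illustrative assumptions on my part, not predictions):

$$0.05 \times 100 \text{ planets} = 5 \text{ planets with factory farming} = 5 \times \text{(the one planet, Earth, that has it today)}$$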
This isn’t a very direct response to your questions, but is relevant, and is a case for why there might be a risk of factory farming in the long-term future. (This doesn’t address the scenarios from your second question.) [Edit: it does have an attempt at answering your third question at the end.]
--
It may be possible that if plant-based meat substitutes are cheap enough and taste like (smell like, have mouth feel of, etc.) animal-derived meat, then it won’t make economic sense to keep animals for that purpose.
That’s the hopeful take, and I’m guessing maybe a more mainstream take.
If life is always cheaper in the long run for producing meat substitutes (if the best genetic engineering can always produce life that outcompetes the best non-life lab techniques), would it have to be sentient life, or could it be some kind of bacteria or something like that? It doesn’t seem to me that sentience is helpful in making animal protein; it probably just imposes some cost.
(Another hopeful take.)
A less hopeful take: One advantage that life has over non-life, and where sentience might be an advantage, is that it can be let loose in an environment unsupervised and then rounded up for slaughter. So we could imagine “pioneers” on a lifeless planet letting loose some kind of future animal as part of terraforming, then rounding them up and slaughtering them. This is not the same as factory farming, but if the slaughtering process (or rounding-up process) is excessively painful, that is something to be concerned about.
My guess is that one obstacle to humans being kind to animals (or being generous in any other way) has to do with whether they are in “personal survival mode”. Utilitarian altruists might be in a “global survival mode” and care about X-risk. But when times get hard for people personally, they tend to become more “personal survival mode” people. Maybe being a pioneer on a lifeless planet is a hard thing that can go wrong (for the pioneers), and the cultures formed by that founding experience will have a hard time being fully generous.
Global survival mode might be compatible with caring about animal welfare. But personal survival mode is probably more effective at solving personal problems than global survival mode is (or there is decent reason to think it could be): even if global survival mode implies that you should care about your own well-being as part of the whole, personal survival mode is more desperate and efficient, and so more focused on and driven toward the outcome of personal survival. Maybe global survival mode is sufficient for human survival, but it would make sense for personal survival mode to outcompete it and seem attractive when times get hard.
Basically, we can imagine space colonization as a furtherance of our highest levels of civilization, with all the colonists selected for their civilized values before being sent out. But each colony might be somewhat fragile and isolated, and could restart at, or devolve to, a lower level of civilization, bringing back to life whatever less-civilized values we feel we have grown past. Maybe from that, factory farming could re-emerge.
If we can’t break the speed of light, it seems likely to me that space colonies (at least if made of humans) will undergo their own cultural evolution and become somewhat estranged from us and from each other (because it will be too hard to stay in touch), and that will risk the re-emergence of values from human history that we don’t like.
How much of cultural evolution is more or less an automatic response to economic development, and how much is path-dependent? If there is path-dependency, we would want to seed each new space colony with colonists who 1) think globally (or maybe “cosmically” is a better term at this scale), with an expanded moral circle, or, more importantly, a tendency to expand their moral circles; 2) are not intimidated by their own deaths; 3) maybe have other safeguards against personal survival mode; 4) are still effective enough at surviving. And we would try to institutionalize those tendencies into an ongoing colonial culture (so that they can survive, but without going into personal survival mode). For references for that seeded culture, maybe we would look to past human civilizations which produced people who were more global than they had to be given their economic circumstances, or notably global even in a relatively “disestablished” (chaotic, undeveloped, dysfunctional, insecure) or stressed state or environment.
(That’s a guess at an answer to your third question.)
I don’t think your dialogue seems creepy, but I would put it in the childish/childlike category. The more mature way to love is to value someone in who they are (so you are loving them, a unique personal being, the wholeness of who they are rather than the fact that they offer you something else) and to be willing to pay a real cost for them.
I use the terms “mature” and “childish/childlike” because (while children are sometimes more genuinely loving than adults) I think there is a natural tendency, as you grow older, to lose some of your taste for the flavors, sounds, feelings of excitement, and so on that you tend to like as a child, to be forced to pay for people, and to come to love them more deeply (more genuinely) because of it.
“Person X gives me great pleasure, a good thing” and “Person X is happy, another good thing”—Is Person X substitutable for an even greater pleasure? Like, would you vaporize Person X (even without causing them pain), so that you could get high/experience tranquility if that gave you greater pleasure? Or from a more altruistic or all-things-considered perspective, if that would cause there to be more pleasure in the world as a whole? If you wouldn’t, then I think there’s something other than extreme hedonism going on.
I do think that you can love people in the very act of enjoying them (something I hadn’t realized when I wrote the comment you replied to). I am not sure if that is always the case when someone enjoys someone else, though. The case I would now make for loving someone just because you enjoy them would be something like this:
“love” of a person is “valuing a person in a personal way, as what they are, a person”;
you can value consciously and by a choice of will;
or, you can value unconsciously/involuntarily by being receptive to enhancement from them. Your body (or something like your body) is in an attitude of receiving good from them. (“Receptivity to enhancement” is Joseph Godfrey’s definition of trust from Trust of People, Words, and God.)
being receptive to enhancement (trusting) is (or could be) your body saying “I ask you to benefit me with real benefit, there is value in you with which to bring me value, you help me with a real need I have, a real need that I have is when there’s something I really lack (when there’s a lack of value in my eyes), you are valuable in bringing me value, you are valuable”.
if the receptivity that is a valuing is receptive to a “you” that to it is a person (unique, personal, unsubstitutable), then you value that person in who they are, and you love them.
It’s possible that creepy people enjoy other people in a way that denies that they are persons and denies their unique personhood. Or they only enjoy without trusting (or while trusting only in a minimal way). Fungibility implies a control over your situation and a certain level of indifference about how to dispose of things. (Vulnerability (deeper trust) inhibits fungibility.) The person who is enjoyed has become a fungible “hedonic unit” to the creepy person.
(Creepy hedonic love: a spider with a fly wrapped in silk, a fly which is now a meal. Non-creepy hedonic love: a calf nursing from a cow, a mutuality.)
A person could be consciously or officially a thorough-going hedonist, but subconsciously enjoy people in a non-creepy way.
I think maturity is like a medicine that helps protect against the tendency of the childish/childlike to sometimes become creepy.
Would it be possible for some kind of third party to give feedback on applications? That way people could get feedback even if hiring organizations find it too costly to provide. Someone who was familiar with how EA organizations think / with hiring processes specifically, or who was some kind of career coach, would be able to say “You are in the nth percentile of EAs I counsel. It’s likely/unlikely that if you are rejected it’s because you’re unqualified overall” or “Here are your general strengths and weaknesses as someone applying to this position, or your strengths and weaknesses as someone seeking a career in EA overall.” Maybe hiring organizations could cooperate with such third parties to educate them on what the organization’s hiring criteria / philosophy are, so that they have something like an inside view.
Suppose there is some kind of new moral truth, but only one person knows it. (Arguably, there will always be a first person. New moral truth might be the adoption of a moral realism, the more rigorous application of reason in moral affairs, an expansion of the moral circle, an intensification of what we owe the beings in the moral circle, or a redefinition of what “harm” means.)
This person may well adopt an affectively expensive point of view, which won’t make any sense to their peers (or may make all too much sense). Their peers may have their feelings hurt by this new moral truth, and retaliate against them. The person with the new moral truth may endure an almost self-destructive life pattern because of the moral truth’s dissonance with the status quo; other peers will object to this pattern, pressure that person to give up their moral truth, and wear away at them to try to “save” them. In the process of resisting the “caring peer”, the new-moral-truth person does things that hurt the “caring peer”’s feelings.
There are at least two ideologies at play here. (The new one and the old one, or the old ones if there are more than one.) So we’re looking at a battle between ideologies, played out on the field of accounting for personal harm. Which ideology does a norm of honoring the least-cost principle favor? Wouldn’t all the harm that gets traded back and forth simply not happen if the new-moral-truth person just hadn’t adopted their new ideology in the first place? So the “court” (popular opinion? an actual court?) that enforces the least-cost principle would probably interpret things according to the status quo’s point of view and enforce adherence to the status quo. But if there is such a thing as moral truth, then we are better off hearing it, even if it’s unpopular.
Perhaps the least-cost principle is good, but there should be some provision in a “court” for considering whether ideologies are true and thus inherently require a certain set of emotional reactions.
The $100-an-item market sounds like fair trade. So you might compete with fair trade and try to explain why your approach is better.
The $50,000-an-item market sounds harder but more interesting. I’m not sure I would ever buy a $50,000 hoodie or mug, no matter how much money I had or how nice the designs on them were. But I could see myself (if I were rolling in money and cared about my personal appearance) buying a tailored suit for $50,000, and explaining that it only cost $200 to make (or whatever it really costs) and the rest went to charity. You might have to establish your brand in a conventional way (tailored suits, fancy dresses, runway shows, etc.) and be compelling artistically, as well as have the ethical angle. You would probably need both to compete at that level, is my guess.
This kind of pursuit is something I am interested in, and I’m glad to see you pursue it.
One thing you could look for, if you want, is the “psychological constitution” being written by a text. People are psychological beings, and the ideas they hold or try to practice shape their overall psychological makeup, affecting how they feel about things and how they act. For example, in the Bhagavad-Gita we are told that it is good to be detached from the fruits of action, but to act anyway. What effect would that idea have if EAs adopted it (to the extent that they haven’t already)? Or a whole population? (Similarly with its advice to meditate.) EAs already relate psychologically to the fruits of their actions in some way. The theistic religions can blend a relationship with ideals or truth itself with a relationship with a person. What difference would that blending make to EAs or to the population at large? I would guess it would produce a different kind of knowing—maybe not changing object-level beliefs (although it could), but changing the psychology of believing (holding an ideal as a relationship to, or a loyalty to, a person rather than as an impersonal law, for instance).
One possibility that maybe you didn’t close off (unless I missed it) is “death by feature creep” (more likely “decline by feature creep”). It’s somewhat related to the slow-rolling catastrophe, but with the assumption that AIs (or systems of agents that include both AIs and humans) might be trying to optimize for stability and thus regulate each other, as well as trying to maximize some growth variable (innovation, profit).
Our inter-agent (social, regulatory, economic, political) systems were built by the application of human intelligence, to the point that human intelligence can’t comprehend the whole, making it hard to solve systemic problems. So in one possible scenario, humans plus narrow AI might simplify the system at first, but then keep adding features to the system of civilization until it is unwieldy again. (Maybe a superintelligent AGI could figure it out? But if it started adding its own features, then maybe not even it would understand what had evolved.) Complexity can come from competitive pressures, but also from technological innovations. Each innovation stresses the system until the system can assimilate it more or less safely by means of new regulation (for instance, social media messes up politics unless / until we can break or manage some of its power).
Then, if some kind of feedback loop leading toward civilizational decline begins, general intelligences (humans, if humans are the only general intelligences) might be even less capable of figuring out how to reverse course than they currently are. In a way, narrow AI here would just be another important technology, marginally complicating the world. But also, we might use narrow AIs as tools in AI (or AI-plus-humans) governance, or perhaps in understanding innovation, and they might be capable of understanding things that we cannot (often things that AIs themselves made up), creating a dependency that could contribute in a unique way to a decline.
(Maybe “understand” is the wrong word to apply to narrow AI but “process in a way sufficiently opaque to humans” works and is as bad.)
One thought that recurs to me is that there could be two related EA movements which draw from each other. There would be no official barrier to participating in both (like being on LessWrong and the EA Forum at the same time), and it would be possible to be a leader in both at the same time (if you have the time/energy for it). One of them would emphasize the “effective” in “effective altruists”, the other the “altruists”. The first would be more like current EA; the second would be more focused on increasing the (lasting) altruism of the greatest number of people, i.e., human-resource focused.
Just about anyone could contribute to the second one, I would think. It could be a pool of people from which to recruit for the first one, and both movements would share ideas and culture (to an appropriate degree).
“King Emeric’s gift has thus played an important role in enabling us to live the monastic life, and it is a fitting sign of gratitude that we have been offering the Holy Sacrifice for him annually for the past 815 years.”
(source: https://sancrucensis.wordpress.com/2019/07/10/king-emeric-of-hungary/ )
It seems to me like longtermists could learn something from people like this. (Maintaining a point of view for 800 years, both keeping the values aligned enough to do this and being around to be able to.)
(Also, a short blog post of mine, occasioned by these monks, about “being orthogonal to history”: https://formulalessness.blogspot.com/2019/07/orthogonal-to-history.html )
I may not have understood all of what you said, but I was left with a few thoughts after finishing this.
1. Creating Bob to have values: if Bob is created so that he can understand he was created to have values, and can then, himself, reject those values and choose his own, then I’d say he is probably more free than if he couldn’t. But, having chosen his own values, he now has to live in society, a society possibly largely determined by an AI. If society is out of tune with him, he will have limited ability to live out his values, and the cognitive dissonance of not being able to live them out will wear away at his ability to hold his freely chosen values. But society has to be some particular way, and it might not be compatible with whatever Bob comes up with (unless maybe each person lives in a simulation that is their society, one that can be engineered to agree with them).
Other than the engineered-solipsism option, it seems unavoidable to limit freedom to some extent. (Or maybe even then: what if people can understand that they are in engineered solipsism and rebel?) But to minimize this, we could design a government (a world-ruling AI) that decides for other people as little as possible and actively fosters people’s ability to make their own decisions. At least, one concern about AI alignment is that AI will consume decision-making opportunities in an unprecedented way, which would lead one to try to prevent that from happening, or even to reduce the level of decision-making hoarding that currently exists.
2. Brainwashing: If I make art, that’s a bit of brainwashing (in a sense). But then someone else can make art, and people can just ignore my art, or their art. It’s more a case of there being a “fair fight” than if someone locks me in a room and plays propaganda tapes 24/7, or just disables the “I can see that I have been programmed and can rebel against that programming” part of my brain. This “fair fight” scenario could maybe be made better than it is (for instance, there could be an AI that actively empowers each person to make, or to ignore, art so as to counteract some brainwashing artist). Our current world has a lot of brainwashing in it, in which some people are more psychologically powerful than others.
3. “Hinge of History”-ness: we could actively try to defer decision-making as much as possible to future generations, giving each generation the ability to make its own decisions and to revoke the past as much as possible (with one limitation: if a generation revokes the past, it can’t impede the next generation from revoking its values), and design/align AI that does the same. In other words, we could actively try to reduce the “hingeyness” of our century.