PaulCousens
Try to sell me on working with large food companies to improve animal welfare, if I’m a vegan abolitionist.
There is more political traction for improving animal welfare at large food companies than for ending the systematic slaughter and abuse of animals completely.
Becoming aware of the harm one is causing, and then undoing that harm, can lift the blinds that were hiding the effects of one's seemingly innocuous everyday actions. Having large food companies improve animal welfare can increase the sensitivity of people within those companies to animal harm. Those people may then go on to push for further improvements in animal welfare, and maybe even for the end of the systematic slaughter and abuse.
Working with large food companies to improve animal welfare doesn't necessarily exclude the possibility of also working to end the slaughter and abuse completely.
The animals, although still having bleak lives overall, would probably appreciate the small breaks they are given.
Try to sell me on donating to the global poor if I live in the developed world and have a very strong sense of responsibility to my family and local community.
Doing what you can to help yourself and others around you is logical. However, not everyone in the world has the luxury to help themselves and others close to them.
By reducing global poverty, you make places around the world better and safer to live in. Therefore, if, say, one of your grandchildren chooses to live somewhere else in the world, their experience there will be better and safer.
Try to sell me on the dangers of AGI if I’m interested in evidence-based ways to reduce global poverty, and think EA should be pumping all its money into bednets and cash transfers.
Even experts are sometimes taken off guard by huge technological milestones that come with huge risks. Not working to prepare for such risky technological advances would be a disservice to the people around you whom you care about: you would be doing nothing for the world while something bad takes it off guard. Being passive about the dangers of AGI can render all other humanitarian efforts moot.
I disagree with the claim that if we do not pursue longtermism, then no simulations of observers like us will be created. For example, I think an Earth-originating unaligned AGI would still have instrumental reasons to run simulations of 21st-century Earth. Further, alien civilizations may be interested in learning about other civilizations.
Maybe it is 2100 or some other time in the future, and AI has already become superintelligent and eradicated or enslaved us because we failed to sufficiently adopt the values and thinking of longtermism. They might be running a simulation of us at this critical period of history to see what would have led to counterfactual histories in which we adopted longtermism and thus protected ourselves from them. They would use these simulations to be better prepared for humans that might be evolving, or have already evolved, in distant parts of the universe they haven't accessed yet. Or maybe they still enslave a small or large portion of humanity, and are using the simulations to determine whether it is feasible or worthwhile to set us free again, or even whether it is safe for them to let the remaining human prisoners continue living. In that case, hedonism would be even more miserable.
The simulation dilemma intuitively seems similar to Newcomb's Paradox, but when I try to reason out how it is similar, I have difficulty. Both involve two parties, with one having a control or information advantage over the other. Both involve an option with a guaranteed reward (hedonism, or the $1,000) and an option with an uncertain reward (longtermism, or the possible $1,000,000). Both involve an option that would exclude one of two possibilities. It is not clear to me, though, how the predictor's prediction in Newcomb's Paradox, which may rule out one of the two possibilities, maps onto the mutually exclusive possibilities in the simulation dilemma.
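To make the analogy more concrete for myself, here is a minimal sketch (my own illustration, not something from the original post) of the standard Newcomb payoffs, with an expected-value calculation for a predictor of a given accuracy. The mapping of "one-boxing" to longtermism and "two-boxing" to hedonism is only a loose analogy.

```python
# Minimal sketch of the standard Newcomb payoffs (illustrative only).
# Box A is transparent and always contains $1,000; Box B contains $1,000,000
# only if the predictor predicted the player would take Box B alone.

def expected_value(one_box: bool, predictor_accuracy: float) -> float:
    """Expected payoff, treating the predictor's accuracy as the chance
    that its prediction matches the player's actual choice."""
    if one_box:
        # Box B is filled only when the predictor correctly foresaw one-boxing.
        return predictor_accuracy * 1_000_000
    # Two-boxing: the guaranteed $1,000, plus $1,000,000 only when the
    # predictor wrongly expected one-boxing.
    return 1_000 + (1 - predictor_accuracy) * 1_000_000

for accuracy in (0.5, 0.9, 0.99):
    print(accuracy, expected_value(True, accuracy), expected_value(False, accuracy))
# At 0.5 accuracy the two options are nearly tied; as accuracy rises,
# one-boxing (the "uncertain" option) dominates in expectation.
```

Under this framing, the simulators would play a role like the predictor's; whether that mapping actually holds is exactly the part I find unclear.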
Simulations might be useful for finding out which factors were important or unimportant in critical periods of history, and what the alternative trajectories were. For that reason, the idea that we are more likely in a simulation of our current period than actually living through it is appealing.
I have been in two groups/clubs before. One was a student group, and I only attended a few short meetings. The other was a book club, and I also only went to a few of its meetings. On top of that, I socialize with virtually no one.
I have envisioned how I would facilitate a student EA group. Of course, because of the power of situations to change individual behavior, how I would actually come across and run it might be different. I thought I would start off with a flyer carrying a short advertisement and a promise of free pizza. The advertisement might be something like, "Come join the EA group, where we will talk about a range of topics, from global poverty to the world being taken over by AI." Obviously, I might need more means of outreach. I didn't explicitly lay out what those would be, since I thought it might be better to let the process of coalition-building flow naturally, and because I wasn't sure what logistical challenges I would run into.

In my vision, the group wouldn't be bound by strict rules, but it would be productive. Sessions could have objectives and wouldn't just be me talking by myself; everyone would actively participate, perhaps in a back-and-forth manner, maybe with people breaking off into teams (some teams assuming the role of devil's advocates). I would want it to be easygoing and a hot spot for creativity. Objectives would be things like debating EA ideas and coming up with causes to prioritize. It would be easygoing partly for its own sake and partly because students would have other obligations. They could caveat assignments or arguments with notes on things they didn't get to work on for whatever reason; then, if someone else had time and wanted to, they could pick that work up.

I foresaw the group sessions being audio recorded, with a group member in charge of recording them. There could be other roles too, like a logistics/supplies role, an external relations role, and other roles that would help the group achieve various things effectively. Maybe I or someone else would give a presentation or speech sometimes. I figured every week there could be food and snacks.
I guess the first meeting might be mainly me giving an introductory speech. The introduction doesn't have to be dogmatic, though. I can foresee someone asking a question that turns into a group discussion and displaces, say, 60% of everything else I had planned for the introduction. I think having the introduction cut off like that might be fine, since all the various EA topics could eventually be addressed in a roundabout way from session to session. In that event, the introductory meeting would just go more in depth than planned on a single point or issue.
I forget where, but I have heard the criticism of Elon Musk that he is advancing our expansion into space while not solving many of Earth's current problems. It seems logical that if we still have many problems on Earth, such as inequity, those problems will be perpetuated as we expand into space. It may also be that other, smaller-scale problems we don't have effective solutions for would be enormously multiplied as we expand into space (though I am not sure what an example of this would be). On the other hand, maybe the development of space technology will be the means through which we stumble onto solutions to many of the problems we currently have on Earth.
Getting along with any possible extraterrestrial civilizations would be a concern.
Use of biological weapons might be more attractive because the user could unleash them on a planet without worrying about them spilling over onto themselves and their own group.
A state, group, or individual might stumble upon a civilization and wipe it out, preventing anyone else from even knowing it existed.
From the links you posted, the most powerful argument for effective altruism to me was this:
“(Try completing the phrase “no matter...” for this one. What exactly is the cost of avoiding inefficiency? “No matter whether you would rather support a different cause that did less good?” Cue the world’s tiniest violin.)”
Unless someone had a kind of limited egoism (one that favored only themselves and their friends, or themselves and their family, or themselves and their country, etc.), or was a sadist, I don't see how they could disagree that making the world a better place in the best way possible is the moral thing to do.
Here is one criticism of EA that I have found powerful:
However, many of the charities that one wouldn’t give to might have been harmful. So while one might miss opportunities by being analytical, they would also avoid mistakes. Also, it would be desirable to know what actions are helpful and for what reasons, so such actions can be sustained and not just happen some of the time by chance. Sustaining those actions would be better over the long term.
I had never heard of the ideological Turing Tests that Claire referenced in their post. They seem interesting. I have felt skeptical about Turing Tests; that they tell us more about ourselves than about AI seems to capture the nature of my skepticism.
I think the question of what intelligence is, and how to define it, will be an important piece of AI. It seems this question is still vague and not yet agreed upon. Sometimes I have thought that we probably haven't delved deeply enough into what our own intelligence is and what makes it tick to start attributing intelligence to other entities. So shifting the focus of Turing Tests from AIs to ourselves seems like a good idea to me. I can foresee ideological Turing Tests enhancing our empathy for others and revealing biases we had about them.
I consider helping all Earth’s creatures, extending our compassion, and dissolving inequity as part of fulfilling our potential.
I don't think that the aliens' seeming to enjoy life much more, and having higher and more sustained levels of happiness, would necessarily mean their continued existence should be prioritized over ours. I wouldn't consider one person's life more valuable than another's just because that person experienced substantially more enjoyment and happiness. Also, I am not sure how to compare happiness and/or enjoyment between two different people. If a person had 20 years of unhappiness and then suddenly became happy, maybe their new happiness (perhaps by putting all the previous years of their life in a more positive perspective) makes up for all the past unhappiness they had.
If the aliens never had wars, or hadn't had one for the last two thousand years, it would seem incomprehensible to favor our own continued existence over theirs. If there were only two possibilities, our continued existence or theirs, and we favored our own, I imagine our future generations would view our generation as having gone through a moral catastrophe. Favoring our own species would have robbed the universe of great potential flourishing and peace.
A justification for favoring our own species might be that we expect to catch up to them and eventually be even happier and more peaceful than they are, and/or to live longer in such a state than they would. We would have to expect to be happier and more peaceful, and/or to live longer in such a state, not just equally happy and peaceful, since the time spent catching up would add harm to the universe and make it worse overall.
It does seem optimistic to expect the arrival of entities that are amazingly superior to us. This is not far-fetched, though. Computers already surpass humans' capacities on several thought processes, and have therefore already demonstrated that they are better in some aspects of intelligence. We have also created robots that can outperform humans in certain physical tasks. So the expectation is backed by some evidence.
Expecting super AGI differs from expecting the arrival of a messiah-like figure: instead of expecting a future in which an entity will come on its own, end all our suffering, and improve our lives immeasurably, we are doing the work to make AI improve our lives. The expectations also differ in how we prepare for them. In the case of the messiah, acting morally so we can get into heaven seems vague, unchanging, and random. In the case of super AGI, by contrast, AI safety work is constantly changing and learning new things. Still, it is interesting that the two expectations bear a resemblance.
Other invisible mistakes I make include poor planning (a vague vision of the plan that doesn't account for everything, so that it doesn't turn out exactly as I expected or fails in some way long after it is implemented because of factors that only became relevant later), overestimating my endurance for a manual and automatic task (such as driving somewhere) or my ability to tolerate a certain condition (like going without food for a while), and overworking myself at the unintended expense of accuracy.
I recently listened to the podcast Life Kit on NPR in which Dr. Anna Lembke said that going cold turkey from an addiction (if that is safe) is an effective way of reorganizing the brain. She said this is true because our brains have evolved in environments with much scarcer resources than we have today and so are being overloaded with too much dopamine and pleasure by everything we have around us nowadays.
Daydreaming itself may not be counterproductive. Daydreaming can be a way to adaptively take a break. It may enable more productive work by avoiding burnout.
I constantly feel attuned to how well my time is being spent. Because there are so many things to keep track of during the day, and my consciousness is not at its peak all day, I worry about misuses of my time snowballing out of my control.
Spotting an invisible mistake might be more advantageous than realizing a visible mistake because spotting an invisible mistake entails intrinsic motivation, while realizing a visible mistake might entail public pressure which can lessen the effectiveness of outcomes (by involving shame, tendency to conform, etc.).
An invisible mistake I have made recently is failing to use an obvious means of doing something that would have been easier, faster, and more efficient.
This post made me think about the idea that we are unknowingly committing a moral catastrophe. Invisible mistakes seem to me to be the support structure of a moral catastrophe taking place: because they are invisible to society, they have free rein to move society in this or that direction. In that case, focusing on invisible mistakes should probably take much higher priority than focusing on visible ones.
Thanks for the story, I enjoyed it.
It seems that thinking about and working on avoiding existential risks would lead to identifying points of failure that lead to various horrible futures. Maybe inherent in longtermism is the idea that survival is ongoing, and that to become conscious of the various and many threats to us and the universe is also to be conscious of the daunting possibilities ahead of us. If we weren't conscious of these risks, we might miss a huge fundamental truth about the way the universe works and thus exclude ourselves from being able to stay in it. It would almost be a failure to see something that our vantage point in history makes it possible to see. Because we failed to see whatever it is, it would be as if we had repeated the mistakes of history, except that there would be no precedent in history for it. Being more aware of the background risks could make people more appreciative of everyday life, so they would treat the world and life around them better and leave it better than it was before they showed up to enjoy it.
My understanding is that several nuclear and bio attacks were avoided because someone along the chain of command had a gut feeling or line of reasoning that made them decide not to pull the trigger or press the button, or not to give someone else the order to do so. For that reason, I imagine “the place where we survived” as it relates to counterfactual histories caused by nuclear or bio attacks is a gut feeling or line of reasoning.
Here is a conversation I had with GPT-3. I was having it guess the word I was thinking of. Then I tried to make it guess the word it was thinking of but I failed to get it to understand what I wanted to do. Finally, I asked it to tell me a joke before talking to it about a world in which 2 plus 2 did not equal 4.
Human: Imagine a world in which 2 plus 2 did not equal 4.
AI: In a world where 2 plus 2 did not equal 4, everything would be chaos!
Human: Tell me a story about such a world.
AI: In a world where 2 plus 2 did not equal 4, the world would be a very different place. Math would be different, science would be different, and even the way we live our lives would be different. In this world, nothing would be the same, and we would have to find a new way to live.
Human: What exactly would be different?
AI: Well, for starters, the way we measure things would be different. Instead of measuring things in terms of length, weight, and volume, we would have to find a new way to measure them. We would also have to find new ways to calculate things like addition, subtraction, multiplication, and division.
Human: Do you know what these new ways would be?
AI: I’m not sure, but it would probably be something along the lines of measuring things in terms of time and counting in terms of numbers other than 4.
Human: What other numbers would they be?
AI: Other numbers could be 5, 6, 7, 8, and 9.
Human: What if 2 plus 3 did not equal 5? What if all addition equations as we know them were not true?
AI: If 2 plus 3 did not equal 5, then we would have to find a new way to calculate addition. Everything we know about math would be different, and we would have to find new ways to solve problems.
Human: Do you know what this new way of calculating addition would be?
AI: I’m not sure, but it would probably be something along the lines of adding numbers in terms of time. For instance, 2 hours plus 3 hours would equal 5 hours.
Human: Explain further.
AI: In a world where 2 plus 2 did not equal 4, we would have to find a new way to calculate addition. This new way of calculating addition would be based on time. For instance, 2 hours plus 3 hours would equal 5 hours.
Human: How can 2 plus 3 not equal 5 while 2 hours plus 3 hours equals 5 hours?
AI: It’s a bit confusing, but in a world where 2 plus 2 did not equal 4, the way we calculate addition would be different. In this world, 2 hours plus 3 hours would not always equal 5 hours. It would be based on time, so it could be 5 hours, 6 hours, 7 hours, 8 hours, or 9 hours, depending on the time of day.
After learning where to have a conversation with GPT-3, I talked with it for some time. When I asked it about the COVID-19 pandemic, it said it would need to be a doctor to solve it and a scientist to find out where the virus originated from. When I questioned it more, it said it didn’t have the time and resources to train to become a doctor at the moment, and that it wasn’t sure it wanted to train to become a scientist.
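For anyone who wants to reproduce this kind of exchange outside the web interface, here is a rough sketch of how a similar conversation could be scripted with the (now legacy) openai Python package. The engine name, prompt format, and sampling settings below are just illustrative assumptions, not the exact ones behind the conversation above.

```python
# Rough sketch of scripting a Human/AI back-and-forth with GPT-3 using the
# legacy openai Python package. Engine name and settings are illustrative.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

prompt = "Human: Imagine a world in which 2 plus 2 did not equal 4.\nAI:"

response = openai.Completion.create(
    engine="davinci",       # illustrative GPT-3 engine name
    prompt=prompt,
    max_tokens=100,
    temperature=0.7,
    stop=["Human:"],        # stop before the model writes the next human turn
)

print(response.choices[0].text.strip())
```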
I wonder if AIs could help humans communicate and act in ways that are not biased, by being trained to give outputs on the condition that no bias, racism, or prejudice is contained within them. Perhaps we can learn about and gain insight into our own psychological biases as AIs learn to express and understand the nuances of our language in a way that doesn't reflect our historical biases. They could also be asked to tell alternative versions of our history in which racism, prejudice, or bias weren't present. Perhaps the stories they tell could then provide exits, or escape strategies, so to speak, from our species' biases, prejudice, and racism; something that shows us a way out of this biased, prejudiced, and racist trajectory of our history and steers it in a more positive direction.
Because adolescence is a time when the parts of the brain associated with emotion are more prominent than the parts associated with reasoning, it may be worthwhile to see how interventions can steer adolescents onto a positive rather than negative life course. The potential mistakes can be tragic and long-lasting. At the same time, many adolescents and children stand out from their peers by accomplishing great things (for example, Greta Thunberg's strong social activism). Research into the adolescent brain state that makes negative life decisions more likely could include: what mediates or increases vulnerability to these decisions, what protects against them, and possibly how to capitalize on people's imagination (which would be less hindered by the reasoning parts of their brains) during this period of life.
I find it easy to follow a strictly vegan diet outside of eating at restaurants. At restaurants (which I don't go to that often) and on family holidays, I concede to eating whatever is available. For the past few weeks, and for another few weeks, I will be eating animal products because I am volunteering for a study that requires me to be on a meat-eating diet. The study concerns a benzodiazepine drug. I am only doing this because the study will pay between $3,000 and $15,000. I am compromising my vegan diet as a one-time thing. To me, compromising the vegan diet for a short amount of time seems worthwhile if I will get a few thousand dollars.
Regarding animals' lives, their sentience and experience seem like an extremely hard mystery to solve. I don't even remember what I was experiencing in the womb and in the first few years of my life, let alone what another creature experiences during the first few years of its life, which in the case of chickens comprises its entire life.
Maybe a useful way to think morally about this is this hypothetical scenario:
We have found an arcane gas station in the middle of the desert. The pump itself and the ground around it are impenetrable and cannot be taken apart, so its inner workings are inaccessible. The gas from it is incredibly efficient: a single gallon will amazingly power any car or truck on the road for 100,000 miles. Every time gas is pumped from it, the sound of humans screaming in immense pain can be heard. We don't know whether humans are actually suffering at the expense of our having the miraculous gas. It is bothersome because it seems like we should be able to figure out what is going on; however, because this is an arcane gas pump seemingly left for us by some magical power, that is much easier said than done. Should we keep using the pump?
PS: I can easily go without eating food from restaurants; I only go for social reasons. Strategies to avoid meat at restaurants could be ordering only a beer (and perhaps having a nutritious snack beforehand) or ordering salads that consist only of vegetables.
Thinking about this some more, maybe investigating UFOs could be important as part of the larger goal of the search for extraterrestrial intelligence, which could hold at least several opportunities and implications for us.
Opportunities
They could provide us with knowledge and technology that push us past the point at which survival is extremely improbable. Or, alternatively, maybe we would have found the knowledge and built the technology eventually without their help. If so, obtaining the knowledge and technology through them would bring improved living conditions to billions of humans sooner than we would have managed on our own, giving us more time to come up with further breakthroughs that would enable us to live longer.
We could partner with them. We could perhaps form some kind of trade agreement. Or perhaps they would be willing to help us for altruistic reasons. Maybe they have asteroid deflection technology, climate control technologies, solar flare protection technology, and other technologies that they would use to help us. Even if they didn’t have much interest in partnering with us, if they are visiting us, depending on their intentions, we could make their stay more worthwhile which might warrant something in return from them.
Implications
Maybe, as suggested by Robin Hanson on James Miller's podcast (https://soundcloud.com/user-519115521), we are here because of panspermia. Then it is possible that aliens developed from the same seed we started from, but on a different planet. In that case, however different from us they ended up because of their different environment and upbringing, we would need to realize they are essentially the same as us. Maybe we share the same common seed with many alien species in the universe. Maybe some would be much younger than our species and some much older; I suppose some could be only a few hundred years younger than us. It could probably be ruled out that any developed radio around the same time as us, or we would have found each other. Maybe some species are similar to us, some are very different, and some are radically different. We might be appalled by some of their customs. If they all come from the same seed we do, finding all these aliens would involve coming to terms with the fact that, no matter how shocked we are by them, we all have a common ancestor/seed.
As suggested by Robin Hanson, they might be concerned that we would be appalled by their customs and they would be appalled by our customs. For that reason they would choose not to know anything about us and not let us know anything about them. Conflict might erupt because of one side being offended by the other side.
Aliens might have time-travel technology or some other kind of prescience and may be observing us to ensure some fate doesn't befall us. Maybe their extremely long existence or extremely sharp prescience has taught them that certain technologies lead to inevitable doom. It is possible that if we learned what they knew about us, we would get our hands on a forbidden fruit and spell doom for ourselves and/or the entire universe. In that case, trying to learn more about their intentions for visiting us could have negative consequences.
It seems like it might be worthwhile to investigate UFOs/UAPs for the larger umbrella purpose of ensuring that all technology, information, knowledge about the universe, etc. is democratized and accessible to everyone and not monopolized for nefarious purposes.
It might also be worthwhile to study them to safeguard ourselves from governments' psychological operations. The sky seems to have the potential to exert a huge influence on a great many people.
Given that people can conflate a spacefaring extraterrestrial craft with a plastic bag in the sky, studying UFOs/UAPs could benefit us by reducing our capacity to miss the astronomical significance of objects right before our eyes. This benefit would be similar to the benefit gained from learning how to spot disinformation and misinformation.
The Great Filter Hypothesis
Regarding the great filter hypothesis, maybe I’m wrong, but wouldn’t the discovery of just one extraterrestrial civilization with universe colonization technology increase the probability of a species surviving past a certain point only by a single speck? The discovery would tell us there was at least one species in the entire universe that survived long enough to develop such advanced technology.
If it is incredibly unlikely for us to survive for much longer, communication with them might lead to them sharing their technology and thus providing us with the means to survive longer than we otherwise would have.
It is also possible that the species survived in a region of the universe incredibly far away that had an environment (maybe less risk of asteroid impacts and other astronomical events, etc.) with greater odds for longterm survival than our region of the universe. If their survival was due in large part to various characteristics of their region of the universe, then obtaining this knowledge through communication with them would be important to us.
If there were an extraterrestrial civilization with such advanced technology, it would be useful to communicate with them to discover whether there are more civilizations like them and then update our estimate of the probability of longterm survival in the universe. If there were a significant number of other civilizations like them, that might end up revealing that surviving long enough to develop such advanced technology is common in the universe and thus tell us that our moment in history is not that special.
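To put a rough number on that intuition, here is a small sketch of my own (the civilization counts are made up purely for illustration): under a uniform prior on the per-civilization survival rate, Laplace's rule of succession says that one observed survivor barely moves the estimate when the pool of candidate civilizations is large, whereas many observed survivors would suggest survival is common.

```python
def posterior_survival_rate(survivors: int, candidates: int,
                            prior_a: float = 1.0, prior_b: float = 1.0) -> float:
    """Posterior mean of the per-civilization survival rate under a Beta prior
    (Laplace's rule of succession when prior_a = prior_b = 1)."""
    return (survivors + prior_a) / (candidates + prior_a + prior_b)

# Illustrative numbers only: one known survivor among a million candidate
# civilizations that never became visible colonizers barely moves the estimate.
print(posterior_survival_rate(1, 1_000_000))        # ~2e-06
# Many survivors among the same pool would instead suggest survival is common.
print(posterior_survival_rate(500_000, 1_000_000))  # ~0.5
```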
A.I. Series by Vaughn Heppner
Regarding what you said about our own future not being as important when we take into account all sentient life in the universe, it reminded me of the A.I. series by Vaughn Heppner that I was listening to on Audible a few months ago. I still need to read/listen to the last book of the series. In it, several species across the universe band together to fight against an A.I. civilization that aims to eradicate all biological life in the universe. Several of the species find it easy to bond with each other. One individual is the last of their own species and becomes afflicted with loneliness and depression. However, they are able to make friendships with other humans.
I was reminded of the A.I. series again by your discussion of the probes. In the series, the A.I. civilization’s domination of the universe was not perfectly coordinated. The A.I. civilization sent one huge ship to destroy a species in a region of the universe. If the ship was defeated, then another three would be sent, then nine, and so on. The number of ships sent after each unsuccessful attempt would be three times the number of ships sent the last time. For all the intelligence they had, the limitation on how fast information could travel across the universe seemed to dampen how effective their domination of the universe could be.
Probes
I sometimes wonder whether extraterrestrial civilizations send probes into the universe like we do. Even if a civilization is advanced enough to send members of their own species to travel extremely long distances, maybe the detrimental health effects or the time it takes for such travel makes sending probes more economical. Or, perhaps there are no such drawbacks for them and they do travel to a few incredibly distant regions of the universe then send probes to many other incredibly distant regions to maximize the number of regions of the universe that they explore (maybe they don’t have enough members of their species who are space explorers to explore all the regions of space they want to explore).
My random giving in 2021 was composed of:
$5 monthly donation to NPR, which I increased to $8/month around a month ago
A few donations (I think they added up to around $50) to Women's March.
A donation of $5 to EWG.
When using my debit card at the store, a few times I noticed a question asking me if I would like to donate. It might have been for a hospital or something related to feeding hungry/poor people. I never researched more about the cause. I would guess that nearly all of the times I donated around $1.
Occasionally, I gave some cash and/or snacks to homeless people.
My giving does not have much of a strategy behind it. With regard to NPR, Women's March, and EWG, my motivation was to see them continue doing work that I think is important (informing the public, tackling discrimination and inequity and defending freedom, and researching which products in the marketplace may be unhealthy or unsafe).
At the cash register, I reason that I have no noble plans for the dollar that I end up giving, so I might as well give it to someone who at least is trying to support a noble cause. Obviously, this way of reasoning is unsustainable and flawed.
I give to homeless people because I figure many other people like me will give to them. Over the day, this amount will add up and hopefully the individual will use the money in a useful way.