Is it possible to create a postsuffering future where involuntary suffering no longer exists?
I’m in the International Suffering Abolitionists group and began the Wikipedia article eradication of suffering.
Many thanks for writing this essay. The history of technological restraint is fascinating. I never knew that Edward Teller wanted to design a 10-gigaton bomb.
Something I have noticed in history is that advocates of technological restraint are often labelled luddites or luddite supporters. Here’s an example from 2016:
Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award
After a month-long public vote, the Information Technology and Innovation Foundation (ITIF) today announced that it has given its annual Luddite Award to a loose coalition of scientists and luminaries who stirred fear and hysteria in 2015 by raising alarms that artificial intelligence (AI) could spell doom for humanity. ITIF argued that such alarmism distracts attention from the enormous benefits that AI can offer society—and, worse, that unnecessary panic could forestall progress on AI by discouraging more robust public and private investment.
“It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse,” said ITIF President Robert D. Atkinson. “Do we think either of them personally are Luddites? No, of course not. They are pioneers of science and technology. But they and others have done a disservice to the public—and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today—by demonizing AI in the popular imagination.”
“If we want to continue increasing productivity, creating jobs, and increasing wages, then we should be accelerating AI development, not raising fears about its destructive potential,” Atkinson said. “Raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption. The obvious irony here is that it is hard to think of anyone who invests as much as Elon Musk himself does to advance AI research, including research to ensure that AI is safe. But when he makes inflammatory comments about ‘summoning the demon,’ it takes us two steps back.”
The list of people the ITIF wanted to call luddites included “Advocates seeking a ban on ‘killer robots’”, probably the Campaign to Stop Killer Robots.
I wonder what the ITIF’s position is on Teller’s 10-gigaton bomb.
Thanks for your post. There’s a reasonable case for GMOs and malaria to be a cause area. Target Malaria is using genetic modification to reduce the population of malaria-transmitting mosquitoes.
Open Philanthropy writes, “It seems likely to us that the cost-effectiveness of this grant will be competitive with donations to the Against Malaria Foundation (though unlikely that it will be more than 10 times as cost-effective)” (Open Philanthropy, 2017).
Ray Dalio also has a TisBest $50 charity gift card page: https://www.tisbest.org/rg/ray/
The Tech Worker Handbook website has more information about Non-Disclosure Agreements (NDAs). It also cautions people against reading the website on a company device:
I do NOT advise accessing this from a company device. Your employer can, and will likely, track visits to a resource like this Handbook.
From Business Insider’s review of 36 NDAs in the tech industry:
Some NDAs say explicitly that the confidentiality provisions never sunset, effectively making them lifelong agreements...
More than two-thirds of workers who shared their agreements with Insider said they weren’t exactly sure what the documents prevented them from saying—or whether even sharing them was a violation of the agreement itself.
That would work. Or an information symbol ⓘ (the letter ‘i’ in a circle).
Or a green sprout. Some games have that to indicate new players.
The donor is anonymous.
From the Wired article: “The temporary exhibit is funded until May by an anonymous donor...”
Charity Entrepreneurship has a report called “Welfare Focused Gene Modification” from March 2019 that mentions golden rice and other GMOs, mostly farm animal interventions. The report may have been superseded, though, since it no longer appears on the website.
This is an interesting idea from the report: “A ‘Good Gene Institute’, similar to the Good Food Institute, that is focused on carefully and thoughtfully building public awareness and interest in individuals getting into the science of genetics-based animal issues.”
New article about wild animal suffering, interventions, genome editing and gene drives:
Johannsen, Kyle (2021). Humanitarian Assistance for Wild Animals. The Philosophers’ Magazine 93:33-37. Available on PhilArchive: https://philarchive.org/archive/JOHHAF-5
Thanks for your questions. Here are some thoughts:
[signalling or alarm system] would be a functional replacement, performing the same function as pain, but replacing suffering with information.
Is this something like rationality? Some individuals can learn by rational rather than emotional understanding. How can an individual’s reasoning potential be known?
I think rationality would apply to both cases. Let’s say you feel pain in your arm, you or your doctor would use rational methods to figure out what’s wrong. The same thing would happen if a diagnostic tool or a gene-edited system notified you, without the pain signal, that there’s something wrong with your arm. You would still use rationality to diagnose and fix the problem.
This would mean that suffering-reducing measures should be universal, or else they could cause unintended suffering to non-participants.
I agree with you that suffering reduction should be universal. Effective altruism has really pushed the idea of overcoming bias in location, time and species.
It is implied that developing competence and survival is enjoyable, and more enjoyable than (painfully) dying very young. Is there any evidence for either of those claims?
The second chapter of the book focuses on r-strategists, but also states that “r-strategist infants aren’t the only wild animals who experience a low level of welfare. Most (sentient) K-strategist animals and r-strategist adults endure a considerable amount of suffering from a variety of sources...”
completely eliminating suffering would decrease an animal’s capacity for positive experiences
What suggests that this is the case? A counter-example is that taking an analgesic does not eliminate one’s ability to feel pleasure.
I agree. Joanne Cameron is also a good example of someone who doesn’t feel pain and appears to have a normal capacity for positive experiences and happiness. The effects of eliminating pain or suffering on happiness are worth further study.
Thanks for all the comments.
Updated the post with a recent tweet from Sam Altman, CEO of OpenAI:
“recalibrate” means “increase” obviously.
disappointing to see this six-week development. openai will continually decrease the level of risk we are comfortable taking with new models as they get more powerful, not the other way around.
Thanks for creating this comprehensive list!
For the wild animal suffering section, there’s a book by Kyle Johannsen, Wild Animal Ethics, that covers the ethics of intervention.
Good idea, but one issue with donating books to a library is that the librarian still has to decide whether to accept or reject the donation. Most librarians are very selective about what gets included and what gets weeded out of their collection.
Another option is to use the library website and find the “Suggest items for the library” web form. (Search the library catalogue first to see whether the library already holds the item.) If the librarian decides to purchase the book, it is completely funded by the library budget.
You can suggest the format too: print, ebook or both. I would say both because both print and ebook formats have their respective strengths and limitations.
For university libraries, if you mention the course or unit (e.g. ethics, philosophy) that would benefit from the book, it helps the librarian to justify the purchase.
Good to see more ideas on new charities.
Could you provide more details on this example idea:
Banning harmful practices (like genetic modification)
Charity Entrepreneurship produced a report on welfare focused gene modification back in 2019. Has there been a change of mind since then?
Lack of access to the incorporated standards, since the standards often cost hundreds of dollars each to access.
Not only are many standards expensive, but they often include digital rights management that makes them cumbersome to access and open.
In Australia, access to standards is controlled by private companies that can charge whatever they like. There’s currently a petition to the Australian parliament with 22,526 signatures requesting free or affordable access to Australian Standards, including standards mandated by legislation. Across the ditch, the New Zealand government has set a great example by funding free access to building standards.
It’s important for AI safety standards to be open access from the start.
Thanks for your post! Good to see this issue in the EA Forum.
Regarding the statement that:
At this point, most people in Taiwan don’t consider themselves Chinese anymore and simply want to be their own nation instead, indefinitely.
Survey data supports your first point. The vast majority of people in Taiwan call themselves “Taiwanese” or “Both Taiwanese and Chinese”:
Survey data doesn’t support your second point though: “[most people in Taiwan] simply want to be their own nation instead, indefinitely”. Most people in Taiwan support the status quo in various forms:
The most popular options are:
Maintain status quo, decide at later date (28.4%)
Maintain status quo indefinitely (27.3%)
Maintain status quo, move toward independence (25.1%)
The survey question doesn’t define what the status quo is, but it’s definitely not independence, and it’s definitely not unification. It’s the grey area, the middle choice, between independence and unification.
The US uses strategic ambiguity to keep Taiwan at the status quo: it will support Taiwan as long as Taiwan doesn’t declare formal independence and provoke a war.
Why is the status quo so popular? It means peace and prosperity, and it has been surprisingly stable over the last 70 years.
No worries, thanks for renaming it. I have added a short lead section.
To add to arguments for inclusion, here’s an excerpt from an EA Forum post about key figures in the animal suffering focus area.
“Major inspirations for those in this focus area include Peter Singer, David Pearce, and Brian Tomasik.”
Four focus areas of effective altruism by Luke_Muehlhauser, 8th Jul 2013
David Pearce’s work on suffering and biotechnology would be more relevant now than in 2013 due to developments in genome editing and gene drives.
(Sorry, I didn’t see your comment until now.)
Animal Ethics has some bibliographical lists: https://www.animal-ethics.org/bibliographical-lists/
Kyle Johannsen’s book Wild Animal Ethics has extensive reference lists https://philpapers.org/rec/JOHWAE-2
Great feature! Just wondering whether Our World in Data charts can be embedded into Substack and Ghost in a similar way.
(Edit: Accidentally posted a duplicate link.)
Aligned with whom? by Anton Korinek and Avital Balwit (2022) has a possible answer. They write that an aligned AI system should have
direct alignment with its operator, and
social alignment with society at large.
Some examples of failures in direct and social alignment are provided in Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek, 2021).
We could expand the moral circle further by aligning AI with the interests of both human and non-human animals. Direct, social and sentient alignment?
As you mentioned, these alignments present conflicting interests that need mediation and resolution.