Is it possible to create a postsuffering future where involuntary suffering no longer exists?
I’m in the International Suffering Abolitionists group and began the Wikipedia article eradication of suffering.
Many thanks for writing this essay. The history of technological restraint is fascinating. I never knew that Edward Teller wanted to design a 10-gigaton bomb.
Something I have noticed in history is that advocates of technological restraint are often labelled Luddites or Luddite sympathizers. Here’s an example from 2016:
Artificial Intelligence Alarmists Win ITIF’s Annual Luddite Award
After a month-long public vote, the Information Technology and Innovation Foundation (ITIF) today announced that it has given its annual Luddite Award to a loose coalition of scientists and luminaries who stirred fear and hysteria in 2015 by raising alarms that artificial intelligence (AI) could spell doom for humanity. ITIF argued that such alarmism distracts attention from the enormous benefits that AI can offer society—and, worse, that unnecessary panic could forestall progress on AI by discouraging more robust public and private investment.
“It is deeply unfortunate that luminaries such as Elon Musk and Stephen Hawking have contributed to feverish hand-wringing about a looming artificial intelligence apocalypse,” said ITIF President Robert D. Atkinson. “Do we think either of them personally are Luddites? No, of course not. They are pioneers of science and technology. But they and others have done a disservice to the public—and have unquestionably given aid and comfort to an increasingly pervasive neo-Luddite impulse in society today—by demonizing AI in the popular imagination.”
“If we want to continue increasing productivity, creating jobs, and increasing wages, then we should be accelerating AI development, not raising fears about its destructive potential,” Atkinson said. “Raising sci-fi doomsday scenarios is unhelpful, because it spooks policymakers and the public, which is likely to erode support for more research, development, and adoption. The obvious irony here is that it is hard to think of anyone who invests as much as Elon Musk himself does to advance AI research, including research to ensure that AI is safe. But when he makes inflammatory comments about ‘summoning the demon,’ it takes us two steps back.”
The ITIF’s list of alleged Luddites included “Advocates seeking a ban on ‘killer robots’”, probably referring to the Campaign to Stop Killer Robots.
I wonder what the ITIF’s position is on Teller’s 10-gigaton bomb.
Thanks for your post. There’s a reasonable case for GMO-based malaria interventions as a cause area. Target Malaria is using genetic modification to reduce the population of malaria-transmitting mosquitoes.
Open Philanthropy writes, “It seems likely to us that the cost-effectiveness of this grant will be competitive with donations to the Against Malaria Foundation (though unlikely that it will be more than 10 times as cost-effective)” (Open Philanthropy, 2017).
Ray Dalio also has a TisBest $50 charity gift card page: https://www.tisbest.org/rg/ray/
The Tech Worker Handbook website has more information about Non-Disclosure Agreements (NDAs). It also cautions people against reading the website on a company device:
I do NOT advise accessing this from a company device. Your employer can, and will likely, track visits to a resource like this Handbook.
From Business Insider’s review of 36 NDAs in the tech industry:
Some NDAs say explicitly that the confidentiality provisions never sunset, effectively making them lifelong agreements...
More than two-thirds of workers who shared their agreements with Insider said they weren’t exactly sure what the documents prevented them from saying—or whether even sharing them was a violation of the agreement itself.
That would work. Or an information symbol ⓘ (the letter ‘i’ in a circle).
Or a green sprout. Some games have that to indicate new players.
The donor is anonymous.
From the Wired article: “The temporary exhibit is funded until May by an anonymous donor...”
Charity Entrepreneurship has a report called “Welfare Focused Gene Modification” from March 2019 that mentions golden rice and other GMOs, though it mostly covers farm animal interventions. The report may have been superseded, though, as it no longer appears on the website.
This is an interesting idea from the report: “A ‘Good Gene Institute’, similar to the Good Food Institute, that is focused on carefully and thoughtfully building public awareness and interest in individuals getting into the science of genetics-based animal issues.”
New article about wild animal suffering, interventions, genome editing and gene drives:
Johannsen, Kyle (2021). Humanitarian Assistance for Wild Animals. The Philosophers’ Magazine 93:33-37. Available on PhilArchive: https://philarchive.org/archive/JOHHAF-5
(Edit: Accidentally posted a duplicate link.)
“Aligned with whom?” by Anton Korinek and Avital Balwit (2022) offers a possible answer. They write that an aligned AI system should have
direct alignment with its operator, and
social alignment with society at large.
Some examples of failures in direct and social alignment are provided in Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek, 2021).
We could expand the moral circle further by aligning AI with the interests of both humans and non-human animals. Direct, social, and sentient alignment?
As you mentioned, these alignments present conflicting interests that need mediation and resolution.