Is it possible to create a post-suffering future where involuntary suffering no longer exists?
I’m in the International Suffering Abolitionists group and started the Wikipedia article “Eradication of suffering”.
The donor is anonymous.
From the Wired article: “The temporary exhibit is funded until May by an anonymous donor...”
Thanks for all the comments.
Updated the post with a recent tweet from Sam Altman, CEO of OpenAI:
“Recalibrate” obviously means “increase” here.
Disappointing to see this shift over just six weeks. OpenAI should continually decrease the level of risk it is comfortable taking with new models as they get more powerful, not the other way around.
John Culver’s How We Would Know When China Is Preparing to Invade Taiwan is also worth reading.
China’s political strategy for unification has always had a military component, as well as economic, informational, legal, and diplomatic components. Most U.S. analysis frames China’s options as a binary of peace or war and ignores these other elements. At the same time, many in Washington believe that if Beijing resorts to the use of force, the only military option it would consider is invasion. This is a dangerous oversimplification. China has many options to increase pressure on Taiwan, including military options short of invasion—limited campaigns to seize Taiwan-held islands just off China’s coast, blockades of Taiwan’s ports, and economic quarantines to choke off the island’s trade. Lesser options probably could not compel Taiwan’s capitulation but could further isolate it economically and politically in an effort to raise pressure on the government in Taipei and induce it to enter into political negotiations on terms amenable to Beijing.
An all-out invasion would be detected months in advance:
Any invasion of Taiwan will not be secret for months prior to Beijing’s initiation of hostilities. It would be a national, all-of-regime undertaking for a war potentially lasting years.
You’re welcome! Thanks for donating!
Ray Dalio also has a TisBest $50 charity gift card page: https://www.tisbest.org/rg/ray/
This describes three utopias. It makes sense to have several, since everyone defines utopia differently.
The ‘Psychonauts’ sound like the Hedonistic Imperative version of utopia:
The Psychonauts had formed the second most popular cluster. They endorsed hedonism as a theory of value, believing that the purpose of life is the elimination of suffering and the enjoyment of bliss.
Hedonistic Imperative—David Pearce. Eradicating suffering through biotechnology and paradise engineering.
Toby Ord has written about the affectable universe, the portion of the universe that “humanity might be able to travel to or affect in any other way.”
I’m curious whether anyone has written about the affectable universe in terms of time.
- We can only affect events in the present and the future.
- Events are always moving from the present (affectable) to the past (unaffectable).
- We should intervene in present events (e.g. reduce suffering) before these events move to the unaffectable universe.
Thanks for your post; great advice.
Please ensure you include the book’s title, author, and year/edition, as well as any other information requested by the library. If you’re a university group organiser, it’s likely helpful to note that you’re with a university student group.
Maybe include the ISBN as well. For academic libraries, it’s also helpful to say which students the book is relevant for. Peter Singer’s books would be relevant for the Arts students studying philosophy, for example. Academic libraries can buy some extracurricular resources, but most of the budget is for course-relevant resources.
It’s important to actually use the books after they arrive! Libraries track metrics like the number of times a book is borrowed, the number of unique borrowers, the date it was last borrowed, etc. Books that don’t get used will eventually be weeded out of the collection, while books that are borrowed a lot may justify multiple copies.
For community building, there’s the International Suffering Abolitionists group, which hosts meetups and runs a Discord server and a section of EA Gather Town.
“Invincible Wellbeing is a research organisation whose mission is to promote research targeting the biological substrates of suffering.”
Appears on the 80,000 Hours Job Board
(Edit: Accidentally posted a duplicate link.)
Aligned with whom? by Anton Korinek and Avital Balwit (2022) has a possible answer. They write that an aligned AI system should have
- direct alignment with its operator, and
- social alignment with society at large.
Some examples of failures in direct and social alignment are provided in Why we need a new agency to regulate advanced artificial intelligence: Lessons on AI control from the Facebook Files (Korinek, 2021).
We could expand the moral circle further by aligning AI with the interests of both human and non-human animals. Direct, social and sentient alignment?
As you mentioned, these alignments present conflicting interests that need mediation and resolution.
Good to see more ideas for new charities.
Could you provide more details on this example idea:
Charity Entrepreneurship produced a report on welfare-focused gene modification back in 2019. Has there been a change of mind since then?