PhD Student in Statistics @UCBerkeley || AI Safety, Animal Welfare
seanrson
Would you/your parents be open to purchasing more plant protein sources? These could be foods like beans, soy milk, or something like a pea protein powder.
Here are some more examples: https://www.chhs.colostate.edu/krnc/monthly-blog/plant-based-protein-a-simple-guide-to-getting-enough/
I think the objection comes from the seeming asymmetry between over-attributing and under-attributing consciousness. It’s fine to discuss our independent impressions about some topic, but when one’s view is a minority position and the consequences of false beliefs are high, isn’t there some obligation of epistemic humility?
Maybe the examples are ambiguous, but they don’t seem cherrypicked to me. Aren’t these some of the topics Yudkowsky is most known for discussing? It seems to me that the cherrypicking criticism would apply to opinions about, I don’t know, monetary policy, not issues central to AI and cognitive science.
Hey Jack! In support of your view, I think you’d like some of Magnus Vinding’s writings on the topic. Like you, he expresses some skepticism about focusing on narrower long-term interventions like AI safety research (vs. broader interventions like improved institutions).
Against your view, you could check out these two (i, ii) articles from CLR.
Feel free to message me if you’d like more resources. I’d love to chat further :)
How about Melanie Mitchell’s Artificial Intelligence: A Guide for Thinking Humans?
Oh totally (and you probably know much more about this than me). I guess the key thing I’m challenging is the idea that there was something like a very fast transfer of power resulting just from upgraded computing power moving from chimp-ancestor brain → human brain (a natural FOOM), which the discussion sometimes suggests. My understanding is that it’s more like the new adaptations allowed for cumulative cultural change, which allowed for more power.
Psychology/anthropology:
The misleading human-chimp analogy: AI will stand in relation to us the same way we stand in relation to chimps. I think this analogy basically ignores how humans have actually developed knowledge and power—not by rapid individual brain changes, but by slow, cumulative cultural changes. In turn, the analogy may lead us to make incorrect predictions about AI scenarios.
In addition to (farmed and wild) animal organizations, OPIS is worth checking out.
Here’s a list of organizations focusing on the quality of the long-term future (including the level of suffering), from this post:
If you are persuaded by the arguments that the expected value of human expansion is not highly positive or that we should prioritize the quality of the long-term future, promising approaches include research, field-building, and community-building, such as at the Center on Long-Term Risk, Center for Reducing Suffering, Future of Humanity Institute, Global Catastrophic Risk Institute, Legal Priorities Project, Open Philanthropy, and Sentience Institute, as well as working at other AI safety and EA organizations with an eye towards ensuring that, if we survive, the universe is better for it. Some of this work has substantial room for more funding, and related jobs can be found at these organizations’ websites and on the 80,000 Hours job board.
I found this to be a comprehensive critique of some of the EA community’s theoretical tendencies (over-reliance on formalisms, false precision, and excessive faith in aggregation). +1 to Michael Townsend’s suggestions, especially adding a TLDR to this post.
Longtermism + EA might include organizations primarily focused on the quality of the long-term future rather than its existence and scope (e.g., CLR, CRS, Sentience Institute), although the notion of existential risk construed broadly is a bit murky and potentially includes these (depending on how much of the reduction in quality threatens “humanity’s potential”)
Cool diagram! I would suggest rephrasing the Longtermism description to say “We should focus directly on future generations.” As it is, it implies that people only work on animal welfare and global poverty because of moral positions, rather than concerns about tractability, etc.
Glad to have you here :D
I’m just going to plug some recommendations for suffering-focused stuff: You can connect with other negative utilitarians and suffering-focused people in this Facebook group, check out this career advice, and explore issues in ethics and cause prioritization here.
Julia Wise (who commented earlier) runs the EA Peer Support Facebook group, which could be good to join, and there are many other EA and negative utilitarian/suffering-focused community groups. Feel free to PM me!
Also, spent hens are almost always sold for slaughter, and many are probably exposed to torture-level suffering in the process. I remember looking into this a while back and only found one pasture farm where spent hens were not sold for slaughter. You can find details for many farms here: https://www.cornucopia.org/scorecard/eggs/
I think considerations like these are important to challenge the recent emphasis on grounding x-risk (really, extinction risk) in near-term rather than long-term concerns. That perspective seems to assume that the EV of human expansion is pretty much settled, so we don’t have to engage too deeply with more fundamental issues in prioritization, and we can instead just focus on marketing.
I’d like to see more written directly comparing the tractability and neglectedness of population risk reduction and quality risk reduction. I wonder if you’ve perhaps overstated things in claiming that a lower EV for human expansion suggests shifting resources to long-term quality risks rather than, say, factory farming. It seems like this claim requires a more detailed comparison between possible interventions.
Cool! Looking forward to learning more about your work. FYI, your Discord invite link is broken.
On the other hand, this would exclude people whose main issue with longtermism is epistemic in nature. But maybe it’s too hard to come up with an acceptable catch-all term.
Also want to shout out @Holly Elmore ⏸️ 🔸 and PauseAI’s activism in getting people to call their senators. (You can commend this effort even if you disagree with an ultimate pause goal.) It could be worth following them for similar advocacy opportunities.