Discussion Thread: AI Welfare Debate Week
We've had a lot of votes on the banner! If you'd like to explain why you voted the way you did, what your cruxes[1] are, and what would change your mind, comment in this thread.
You can also mention if you'd be open to having a dialogue with another Forum user who disagrees with you. If someone comments and offers to dialogue with you, you can set up a time to write a dialogue together (perhaps via Forum DMs).
To find out more about the event, and how to contribute, read the announcement post.
- ^
Beliefs or assumptions which determine your overall opinion, but which are better targets for argument, i.e. ones you would more easily change your mind on. For example, one of mine is "philosophy of mind doesn't make progress".
Ok, at Toby's encouragement, here are my thoughts:
This is a very old point, but to my mind, at least from a utilitarian perspective, the main reason it's worth working on promoting AI welfare is the risk of foregone upside. That is, without actively studying what constitutes AI welfare and advocating for producing it, we seem likely to have a future that's very comfortable for ourselves and our descendants (fully automated luxury space communism, if you like) but which contains a very small proportion of the value that could have been created by creating lots of happy artificial minds. So concern for creating AI welfare seems likely to be the most important way in which utilitarian and human-common-sense moral recommendations differ.
It seems to me that the amount of value we could create if we really optimized for total AI welfare is probably greater than the amount of disvalue we'll create if we just use AI tools and allow for suffering machines by accident, since in the latter case the suffering would be a byproduct, not something anyone optimizes for.
But AI welfare work (especially if this includes moral advocacy) just for the sake of avoiding this downside also seems valuable enough to be worth a lot of effort on its own, even if suffering AI tools are a long way off. The animal analogy seems relevant: it's hard to replace factory farming once people have started eating a lot of meat, but in India, where Hinduism has discouraged meat consumption for a long time, less meat is consumed and so factory farming is evidently less widespread.
So in combination, I expect AI welfare work of some kind or another is probably very important. I have almost no idea what the best interventions would be or how cost-effective they would be, so I have no opinion on exactly how much work should go into them. I expect no one really knows at this point. But at face value the topic seems important enough to warrant at least doing exploratory work until we have a better sense of what can be done and how cost-effective it could be, only stopping in the (I think unlikely) event that we can say with some confidence that the best AI welfare work to be done is worse than the best work that can be done in other areas.
When telling stories like your first paragraph, I wish people either said "almost all of the galaxies we reach are tiled with some flavor of computronium and here's how AI welfare work affected the flavor" or "it is not the case that almost all of the galaxies we reach are tiled with some flavor of computronium and here's why."
"The universe will very likely be tiled with some flavor of computronium" is a crucial consideration, I think.
To my mind, the first point applies to whatever resources are used throughout the future, whether it's just the earth or some larger part of the universe.
I agree that the number/importance of welfare subjects in the future is a crucial consideration for how much to do longtermist as opposed to other work. But when comparing longtermist interventions (say, splitting a budget between lowering the risk of the world ending and proportionally increasing the fraction of resources devoted to creating happy artificial minds) it would seem to me that the "size of the future" typically multiplies the value of both interventions equally, and so doesn't matter.
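To make the multiplicative point concrete, here is a toy model (my own illustration, not from the comment itself; the symbols $N$, $p$, $\bar{v}$, $\Delta p$, and $\Delta\bar{v}$ are all assumptions introduced for the sketch):

```latex
% Toy model: N = size of the reachable future (resources),
% p = probability the world survives,
% \bar{v} = average welfare produced per unit of future resources.
% Intervention A lowers existential risk by \Delta p;
% intervention B raises average welfare by \Delta\bar{v}.
\[
  \mathrm{EV}(A) = \Delta p \cdot \bar{v}\,N,
  \qquad
  \mathrm{EV}(B) = p \cdot \Delta\bar{v}\,N
\]
% Their ratio is independent of N, so the "size of the future"
% cancels when comparing the two longtermist interventions:
\[
  \frac{\mathrm{EV}(A)}{\mathrm{EV}(B)}
  = \frac{\Delta p\,\bar{v}}{p\,\Delta\bar{v}}
\]
```

Under these assumptions, the size of the future matters for longtermism vs. other work, but not for choosing among longtermist interventions whose value scales linearly in $N$.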
(Not an AI welfare/safety expert by any stretch, just adding my two cents here! Also I was very piqued by the banner and loved hovering over the footnote! I've thought about digital sentience before, but this banner and this week really put me into a "hmm..." state.)
My view leans towards "moderately disagree." (I fluctuated between this, neutral, and slightly agree.) For context, for AI safety I'd say "highly agree." Thoughts behind my current position:
Why I'd prioritize it less:
I consider myself a longtermist, but I have always grappled with the opportunity costs of highly prioritizing more "speculative" areas. I care about high-EV areas, but I also grapple with deprioritizing very tangible cause areas, with existing beings, that have high EV too. Looking at the table below, I'd lean towards giving more resources to animal welfare rather than making AI welfare a priority right now.
I also worry about the ramifications of diverting more EAs into a very dense, specialized sector. While specialization is important, I am concerned that it might sometimes lead to a narrower focus that doesn't fully account for the broader, interconnected systems of the world. In contrast, fields like biosecurity often consider a wider range of factors and take a more integrative perspective. That holistic view can be crucial in addressing complex, multifaceted issues, and one reason I would prioritize AI welfare less is the opportunity cost relative to areas that may be more holistic (not that AI welfare has no claim to being considered holistic).
I have some concerns that trying to help AI right now might make things worse, since we don't yet fully understand which of today's actions could make things riskier. (Nathan said something to this effect in this thread.)
I don't know to what extent harms from neglecting AI welfare are irreversible, compared to those from unaligned AI.
It seems less likely that multiplanetary civilizations will develop alongside advanced AI, which reduces the likelihood of AI systems spreading across the universe, which in turn lowers how much I'd prioritize AI welfare at a universal scale.
Why I'd still prioritize it:
I can't see myself assigning a 0% chance that AI would be sentient, and I can't see myself allocating less than (edit:) 2% of effective altruism's resources and talent to something wide-scale that I'd hold a possibility of being sentient, even if it's less standard (i.e. more outside average moral circles), because of the big potential value creation, the prevention of suffering, and the potential for additional happiness, all of which I'm highly for.
I think more exploratory, relatively untapped work needs to be done, and just establishing enough baseline infrastructure is important for this high-EV type of cause (assuming we expect AI to be very widespread).
I like trammell's animal welfare analogy.
Overall, I agree that resources and talent should be allocated to AI welfare because it's prudent and can prevent future suffering. However, I moderately disagree with making it an EA priority, due to its current speculative nature and how I weigh it against AI safety. I do think AI safety and solving the alignment problem should be a priority, especially over the next few years, and I hold some confidence that this would help prevent digital suffering.
Other thoughts:
I wonder if there'd ever be a conflict between AI welfare and human welfare, or the welfare of other beings. I haven't put much thought here. Something that immediately comes to mind: advanced AI systems might require substantial energy and infrastructure, potentially competing with human needs. From a utilitarian point of view, this presents a significant dilemma. However, there's the argument that solving AI alignment could mitigate these issues, ensuring that AI systems are developed and managed in ways that do not harm human welfare. My current thinking is that conflict between AI and human welfare is less likely if we solve the alignment problem and improve the policy infrastructure around AI. I might compare this to historical precedents in bioethics, which suggest that ethical alignment leads to better welfare outcomes.
Some media that have made me truly feel for AI welfare are "I, Robot," "Her," Black Mirror's "Joan is Awful," and "Klara and the Sun"!
My reasons for not making it a priority:
it seems like we will be better placed to solve the issue in the future (we will understand AIs much better specifically, and also perhaps will just have much better intellectual tools generally),
it seems like most of the mistakes we can make by getting this wrong are mistakes we can fix if we get it right later on.
By contrast to existential risk, which we need to get right now or lose the opportunity (and all other opportunities) forever, I don't see a corresponding loss of option value here. Perhaps it's worth thinking about how to ensure we preserve the will to solve the issue through whatever upheaval comes next. But I think that's much easier than actually trying to solve it right now.
edit: I think the first consideration isn't nearly as strong for poverty/health interventions and animal interventions: it feels more like we already know some good things to do there, so I'm on board with starting now, especially in cases where we think their effects will compound over time.
Do you have a sense of what the right amount to spend is?
I think spending zero dollars (and hours) isn't obviously a mistake, but I'd be willing to listen to someone who wanted to advocate for some specific intervention to be funded.
It seems really valuable to have experts at the time the discussion happens.
If you agree, then it seems worth training people now for the time when we do discuss it.
We can do that in the future too?
Training probably takes 3 years to spin up and maybe another 3 years to complete. When did we start deciding to train people in AI safety, versus when was there enough talent?
Seems plausible to me that the AI welfare discussion happens before we're ready for it.
But again, you're suggesting a time-limited window in which the AI welfare discussion happens, and that if we don't intervene in that window it'll be too late. I just don't see a reason to expect that. I imagine that after the first AI welfare discussion, there will be others. While obviously it's best to fix problems as soon as possible, I don't think it's worth diverting resources from problems with a much clearer reason to believe we need to act now.
I think that being there at the start of a discussion is a great way to shift it. Look at AI safety (for good and ill).
For me, a key question is "How much is 5%?"
Here is a table I found.
So it seems like right now 5% is somewhere in the same range as Animal Welfare and EA Meta funding.
I guess that seems a bit high, given that animals exist and AIs donât.
I think a key benefit of early AI work was training AI safety folks to be around when needed. Having resources at a crucial moment isn't solely about money; it's about having the resource that is useful in that moment. A similar thing to do here might be to train philosophers, government staffers, and activists who are well versed in the AI welfare arguments and who can act if need be.
Not clear to me that that requires 5% of EA funding though.
this is super helpful! would be cool if we can see %s given to insect sentience or other smaller sub cause areas like that. does anyone have access to that?
I'd guess less than 0.5% (90% confidence).
I think the burden of proof lies with those advocating for AI welfare as an EA priority.
So far, I haven't read compelling arguments to change my default.
What's your thought on this:
Can you expand on this? Do you think that a model loaded onto a GPU could be conscious?
And do you think bacteria might be conscious?
I think given a big enough GPU, yes, it seems plausible to me. Our minds are memory stores that perform calculations. What is missing in terms of a GPU?
I think bacteria are unlikely to be conscious due to a lack of processing power.
Something unknown.
Do you think it's plausible that a GPU rendering graphics is conscious? Or do you think that a GPU can only be conscious when it runs a model that mimics human behavior?
Potential counterargument: microbial intelligence.
I agree, and I hope we get some strong arguments from those in favor. I would imagine there is already a bunch written, given the recent kerfuffle over Open Phil defunding it.
I have doubts about whether digital or electronic information flows will yield valenced sentience, although I don't rule it out.
But I have much stronger doubts about whether we can ever know what makes these âdigital mindsâ happy or sad, or even what goes in what direction.
Despite being a panpsychist, I rate it fairly low. I don't see a future where we solve AI safety but there are still a lot of suffering AIs. And if we fail on safety, then it won't matter what you wrote about AI welfare; the unaligned AI is not going to be moved by it.
My thoughts on the issue:
Humans are conscious in whatever way that word is normally used
Our brains are made of matter
I think itâs likely weâll be able to use matter to make other conscious minds
These minds may be able to replicate far faster than our own
A huge amount of future consciousness may be non-human
The wellbeing of huge chunks of future consciousness is worthy of our concern
Forecasting is hard, and it's not clear how valuable early AI safety work was (maybe it even led to much more risk?)
Early work on AI welfare might quite plausibly make the problem worse.
Part of me would like to get better calibrated on which interventions work and which donât before hugely focusing on such an important issue
Part of me would like to fund general state capacity work and train experts to be in a better place when this happens