Feel free to message me on here.
JackM
Well, the closest analogue we have today is factory-farmed animals. We use them in a way that causes tremendous suffering. We don’t really mean to cause the suffering; it’s a by-product of how we use them.
And another, perhaps even better, analogue is slavery. Maybe we’ll end up essentially enslaving digital minds because it’s useful to do so—if we gave them too much freedom, they wouldn’t do what we want them to do as effectively.
Creating digital minds just so that they can live good lives is a possibility, but I’d imagine that if you asked someone on the street whether we should do this, they’d look at you like you were crazy.
Again, I’m not sure how things will pan out, and I would welcome strong arguments that suffering is unlikely, but it’s something that does worry me.
Do you agree that the experience of digital minds likely dominates far future calculations?
This leads me to want to prioritize making sure that if we do create digital minds, we do so well. This could entail raising the moral status of digital minds, improving our ability to understand sentience and consciousness, and making sure AI goes well and can help us with these things.
Extinction risk becomes less important to me. If we go extinct, we get 0 value from digital minds, which seems bad, but it also means we avoid the futures where we create them and they suffer. It’s hard to say if we are on track to creating them to flourish or to suffer—I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which could mean them suffering. Alternatively, we have seen our moral circle expand over time, and this may continue, so there is a real possibility we could create them to flourish. I don’t have a clear view on which side wins here, so overall going extinct doesn’t seem obviously terrible to me.
This is a question I could easily change my mind on.
The experience of digital minds seems to dominate far future calculations. We can get a lot of value from this, a lot of disvalue, or anything in between.
If we go extinct then we get 0 value from digital minds. This seems bad, but we also avoid the futures where we create them and they suffer. It’s hard to say if we are on track to creating them to flourish or to suffer—I think there are arguments on both sides. The futures where we create digital minds may be the ones where we wanted to “use” them, which could mean them suffering. Alternatively, we have seen our moral circle expand over time, and this may continue, so there is a real possibility we could create them to flourish. I don’t have a clear view on which side wins here, so overall going extinct doesn’t seem obviously terrible to me.
We could instead focus on raising the moral status of digital minds, improving our ability to understand sentience and consciousness, improving societal values, and making sure AI goes well and can help us with these things. These robustly increase the expected value of digital sentience in futures where we survive.
So because reducing extinction risk has close to 0 expected value for me, and increasing the value of futures where we survive is robustly positive in expected value, I lean towards increasing the value of futures where we survive.
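To make this concrete, here is a minimal sketch of the expected-value comparison I have in mind; every number in it is an illustrative placeholder rather than an estimate I’m defending.

```python
# Toy expected-value comparison (all numbers are illustrative placeholders).
p_flourish = 0.5    # chance that, conditional on surviving, digital minds flourish
v_flourish = 10     # value of a future where digital minds flourish
v_suffer   = -10    # value of a future where digital minds suffer
v_extinct  = 0      # extinction: no digital minds, so no value either way

# Roughly symmetric upside and downside makes survival ~0 in expectation,
# which is why extinction doesn't look obviously terrible to me.
ev_survive = p_flourish * v_flourish + (1 - p_flourish) * v_suffer  # 0.0

# Work on values / moral circle expansion nudges p_flourish upward, which is
# robustly positive in the futures where we survive.
ev_survive_after_values_work = 0.6 * v_flourish + 0.4 * v_suffer    # 2.0

print(ev_survive, ev_survive_after_values_work)
```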
This gives the impression that longtermism is satisfied with prioritising one option in comparison to another, regardless of the context of other options which if considered would produce outcomes that are “near-best overall”. And as such it’s a somewhat strange claim that one of the best things you could do for the far future is in actuality “not so great”.
Longtermism should certainly prioritise the best persistent state possible. If we could lock-in a state of the world where there were the maximum number of beings with maximum wellbeing of course I would do that, but we probably can’t.
Ultimately, the great value from a longtermist intervention comes from comparing it to the state of the world that would have happened otherwise. If we can lock in value 5 instead of locking in value 3, that is better than if we can lock in value 9 instead of locking in value 8.
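Put simply, the impact is the difference between the two worlds. Using just the numbers from the example above:

```python
# Counterfactual impact = value locked in with the intervention
#                         minus value locked in without it.
impact_a = 5 - 3  # lock in 5 instead of 3 -> impact of 2
impact_b = 9 - 8  # lock in 9 instead of 8 -> impact of 1
print(impact_a > impact_b)  # True: the first intervention does more good
```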
At its heart, the “inability to predict” arguments really hold strongly onto the sense that the far future is likely to be radically different and therefore you are making a claim to having knowledge of what is ‘good’ in this radically different future.
I think we just have different intuitions here. The future will be different, but I think we can make reasonable guesses about what will be good. For example, I don’t have a problem with a claim that a future where people care about the wellbeing of sentient creatures is likely to be better than one where they don’t. If so, expanding our moral circle seems important in expectation. If you’re asking “why”—it’s because people who care about the wellbeing of sentient creatures are more likely to treat them well and therefore more likely to promote happiness over suffering. They are also therefore less likely to lock-in suffering. And fundamentally I think happiness is inherently good and suffering inherently bad and this is independent of what future people think. I don’t have a problem with reasoning like this, but if you do then I just think our intuitions diverge too much here.
Thus, while reducing risk associated with asteroid impacts has immediate positive effects, the net effect on the far future is more ambiguous.
Maybe fair, but if that’s the case I think we need to find those interventions that are not very ambiguous. Moral circle expansion seems like one that is very hard to argue against. (I know I’m changing my interventions—that doesn’t mean I no longer think the previous ones are good; I’m just trying to see how far your scepticism goes.)
For a simple example, as soon as humans start living comfortably, in addition to but beyond Earth (for example on Mars), the existential risk from an asteroid impact declines dramatically, and further declines are made as we extend out further through the solar system and beyond. Yet the expected value is calculated on the time horizon whereby the value of this action, reducing risk from asteroid impact, will endure for the rest of time, when in reality, the value of this action, as originally calculated, will only endure for probably less than 50 years.
Considering this particular example—if we spread out to the stars then x-risk from asteroids drops considerably, as no one asteroid can kill us all—that is true. But the value of the asteroid-risk-reduction intervention comes from actually getting us to that point in the first place. If we hadn’t reduced risk from asteroids and had gone extinct, then we’d have value 0 for the rest of time. If we can avert that and become existentially secure, then we have non-zero value for the rest of time. So yes, we would indeed have done an intervention whose impacts endure for the rest of time. X-risk reduction interventions are trying to get us to a point of existential security. If they do that, their work is done.
Conditional on fish actually being able to feel pain, it seems a bit far-fetched to me that a slow death in ice wouldn’t be painful.
I was trying to question you on the duration aspect specifically. If an electric shock lasts a split second, is it really credible that it could be worse than a slow death through some other method?
though I’ll happily concede it’s a longer process than electrical stunning
Isn’t this pretty key? If “Electrical stunning reliably renders fish unconscious in less than one second” as Vasco says, I don’t see how you can get much better than that in terms of humane slaughter.
Or are you saying that electrical stunning is plausibly so bad even in that split second so as to make it potentially worse than a much slower death from freezing?
I’m a bit confused about whether I’m supposed to be answering on the basis of my uninformed prior, some slightly informed prior, or even my posterior here. I’m not sure how much you want me to answer based on my experience of the world.
For an uninformed prior, I suppose it would be any individual entity that I can visually see. I see a rock and I think “that could possibly be conscious”. I don’t lump the rock with another nearby rock and think maybe that ‘double rock’ is conscious, because they appear to me to be independent entities; they are not really connected in any physical way. This obviously does factor in some knowledge of the world, so I suppose it isn’t a strictly uninformed prior, but it’s about as uninformed as is useful to talk about?
Yeah, if I were to translate that into a quantitative prior, I suppose it would be that other individuals have roughly a 50% chance of being conscious (i.e. I’m agnostic on whether they are or not).
Then I learn about the world. I learn about the importance of certain biological structures for consciousness. I learn that I act in a certain way when in pain and notice other individuals do as well etc. That’s how I get my posterior that rocks probably aren’t conscious and pigs probably are.
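If it helps, that process looks roughly like a Bayesian update. The sketch below is only meant to show the shape of the reasoning; the likelihood numbers are entirely made up.

```python
# Minimal Bayesian-update sketch (likelihoods are made-up placeholders).
def update(prior, likelihood_if_conscious, likelihood_if_not):
    """Return P(conscious | evidence) given a prior and two likelihoods."""
    numerator = prior * likelihood_if_conscious
    return numerator / (numerator + (1 - prior) * likelihood_if_not)

prior = 0.5  # agnostic starting point for any individual entity

# A pig: has a central nervous system, shows pain behaviour (avoidance, crying out).
pig = update(prior, likelihood_if_conscious=0.9, likelihood_if_not=0.1)

# A rock: no nervous system, no pain behaviour.
rock = update(prior, likelihood_if_conscious=0.05, likelihood_if_not=0.95)

print(pig, rock)  # 0.9 vs 0.05: pigs probably are conscious, rocks probably aren't
```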
I certainly don’t put 0 probability on that possibility.
I agree uninformed prior may not be a useful concept here. I think the true uninformed prior is “I have no idea what is conscious other than myself”.
How far and how to generalize for an uninformed prior is pretty unclear. I could say just generalize to other human males because I can’t experience being female. I could say generalize to other humans because I can’t experience being another species. I could say generalize to only living things because I can’t experience not being a living thing.
If you’re truly uninformed, I don’t think you can really generalize at all. But in my current, relatively uninformed state, I generalize to those that are biologically similar to humans (e.g. have a central nervous system), as I’m aware of research about the importance of this type of biology within humans for elements of consciousness. I also generalize to other entities that act in a similar way to me when in supposed pain (they try to avoid it, cry out, bleed, and become less physically capable, etc.).
To be honest I’m not very well-read on theories of consciousness.
I don’t see why we should generalise from our experience to the idea that individual organisms are the right boundary to draw.
For an uninformed prior that isn’t “I have no idea” (and I suppose you could say I’m uninformed myself!) I don’t think we have much of an option but to generalise from experience. Being able to say it might happen at other levels seems a bit too “informed” to me.
Most EAs I speak to seem to have similarly-sized bugbears?
Maybe I don’t speak to enough EAs, which is possible. Obviously many EAs think our overall allocation isn’t optimal, but I wasn’t aware that many EAs think we are giving tens of millions of dollars to interventions/areas that do NO good in expectation (which is what I mean by “burning money”).
Maybe the burning money point is a bit of a red herring though if the amount you’re burning is relatively small and more good can be done by redirecting other funds, even if they are currently doing some good. I concede this point.
To be honest you might be right overall that people who don’t think our funding allocation is perfect tend not to write on the forum about it. Perhaps they are just focusing on doing the most good by acting within their preferred cause area. I’d love to see more discussion of where marginal funding should go though. And FWIW one example of a post that does cover this and was very well-received was Ariel’s on the topic of animal welfare vs global health.
It’s the only part of my comment that argues Jeff was effectively saying he would have to “be” an animal advocate, which is exactly what you’re arguing against.
So I guess my best reply is just to point you back to that...
Oh well, was nice chatting.
You seem to have ignored the bit I put in bold in my previous comment.
OK, so he says he would primarily be “known” as an anti-animal advocate, not “become” one.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate
But he then also says the following (bold emphasis mine):
I also expect that, again, conditional on doing a good job of this, I would need to spend a lot of time as a representative of this position: responding to the best counter arguments, evaluating new information as it comes up, people wanting me to participate in debates, animal advocates thinking that changing my mind is really very important for making progress toward their goals. These are similarly not where I want to put my time and energy, either for altruistic reasons or personal enjoyment.
I’m struggling to see how what I said isn’t accurate. Maybe Jeff should have said “I would feel compelled to” rather than “I would need to”.
To clarify, when I asked if you could do a good job I meant can you put together a convincing argument that might give some people like me pause for thought (maybe this is indeed how you understood me).
If you think you can, I would strongly encourage you to do so. As per another comment of mine, tens of millions of dollars goes towards animal welfare within EA each year. If this money is effectively getting burned it is very useful for the community to know. Also, there is no convincing argument that animals are not moral patients on this forum (or indeed anywhere else) that I am aware of, so your view is exceedingly neglected. I think you could really do a whole lot of good if you do have a great argument up your sleeve.
Your argument that you would effectively be forced into becoming an anti-animal advocate if you convincingly wrote up your views—sorry I don’t really buy it. For example, I don’t think Luke Muehlhauser has been forced into becoming a pro-animal advocate, in the way you hypothesise that you would, after writing his piece. This just seems like too convenient an excuse, sorry.
Of course you’re not under any obligation to write anything (well...perhaps some would argue you are, but I’ll concede you’re not). But if I thought I had a great argument up my sleeve, mostly ignored by the community, which, if true, would mean we were effectively burning tens of millions of dollars a year, I know I’d write it up.
Thanks for your response.
This is especially the case because if I did a good job at this I might end up primarily known for being an anti-animal advocate
I’d be interested to know how likely you think it is that you could do a “good job”. You say you have a “bundle of intuitions and thoughts” which doesn’t seem like much to me.
I’m also very surprised you put yourself at the far end of the spectrum in favor of global health > animal welfare based on a “bundle of intuitions and thoughts” on what is ultimately a very difficult and important question.[1] In your original comment you say “This isn’t as deeply a considered view as I’d like”. Were you saying you haven’t considered deeply enough or that the general community hasn’t?
And thanks for the sketch of your reasoning but ultimately I don’t think it’s very helpful without some justification for claims like the following:
I think it is pretty unlikely that very young children, in their first few months, have this kind of inner experience.
[1] I also put myself at the far end of the spectrum in the other direction, so I feel I should say something about that. I think the arguments for animal sentience/moral patienthood are pretty strong (e.g. see here for a summary) and I would not say I’m relying on intuition. I’m of course not sure that animals are moral patients, but even if you put only a small probability on it, the vast number of animals being treated poorly can justifiably lead to a strong view that resources for animal welfare are better in expectation than resources for global health. Ultimately, for this argument not to work on the basis that animals aren’t moral patients, I think you probably need to be very confident of that, to counteract the vast number of animals that can be helped.
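To illustrate the expected-value point, here is a minimal sketch; the probability, the numbers of individuals helped per dollar, and the welfare weights are all made-up placeholders rather than actual estimates.

```python
# Toy comparison per dollar (every input is an illustrative placeholder).
p_moral_patient = 0.1          # small probability that animals are moral patients
animals_per_dollar = 100       # animals affected per dollar (placeholder)
welfare_gain_per_animal = 0.2  # welfare improvement per animal, arbitrary units

humans_per_dollar = 0.0003     # humans substantially helped per dollar (placeholder)
welfare_gain_per_human = 100   # welfare improvement per human, same arbitrary units

ev_animals = p_moral_patient * animals_per_dollar * welfare_gain_per_animal  # 2.0
ev_humans = humans_per_dollar * welfare_gain_per_human                       # 0.03

print(ev_animals, ev_humans)
# Even a 10% chance of moral patienthood dominates here; to flip the result,
# p_moral_patient would have to be roughly 70x smaller, i.e. you'd need to be
# very confident that animals are not moral patients.
```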
Animal welfare researchers need sentience to be true, otherwise all the work they are doing is worth a lot less.
That is fair, but there are several additional reasons why most people would want animals not to be moral patients:
They can continue to eat them guilt-free and animals are tasty.
People can give to global health uncertainty-free and get “fuzzies” from saving human lives with pretty high confidence (I think we naturally get more fuzzies by helping people of our own species).
We wouldn’t as a human species then be committing a grave moral atrocity which would be a massive relief.
There aren’t really similar arguments for wanting animals to be moral patients (other than “I work on animal welfare”) but I would be interested if I’m missing any relevant ones.
You’re basically saying happier machines will be more productive and so we are likely to make them to be happy?
Firstly, we don’t necessarily understand consciousness well enough to know if we are making them happy, or even whether they are conscious at all.
Also, I’m not so sure that happier means more productive. More computing power, better algorithms, and more data will mean more productive machines. I’m open to hearing arguments for why this would also mean the machines are more likely to be happy.
Maybe the causality goes the other way—more productive means more happy. If machines achieve their goals, they get more satisfaction. Then maybe happiness just depends on how easy the goals we give them are. If we set an AI on an intractable problem and it never solves it, maybe it will suffer. But if AIs are constantly achieving things, they will be happy.
I’m not saying you’re wrong, just that there seems to be a lot we still don’t know, and the link between optimization and happiness isn’t straightforward to me.