Yeah sure! Thanks so much!
Omnizoid
Checking it out now.
Things with a 50% chance of being very good aren't Pascal's muggings! Your decision theory can't be "Pascal's mugging means I ignore everything with probability less than .5 of being good."
But those guys almost definitely aren’t conscious. There’s a difference between how you reason about absurdly low probabilities and decent probabilities.
(I also think that we shouldn't a priori rule out that the world might be messy such that we're constantly inadvertently harming huge numbers of conscious creatures.)
I do not think a joking throwaway reference to a statement from the upcoming vice president is offensive.
I was assuming both buttons are available. Specifically, suppose Bob exists:
1. Bob getting an extra 1 util and Todd being created with 1 util is better than that not happening.
2. Todd being created with 3 utils is better than the scenario in 1.
//This isn’t true. I can just deny the independence of irrelevant alternatives instead.//
That doesn't help. The world where only button 1 is pressed is better than the world where neither is pressed, and the world where both are pressed is better than the world where only button 1 is pressed; so by transitivity, an extra happy person is good.
You can always deny any intuition, but I’d hope this would convince people without fairly extreme views.
I address that in the article. First of all, so long as we buy the transitivity of the better-than relation, that won't work. Second, it's highly counterintuitive that the addition of extra good options makes an action worse.
I find it crazy and I think nearly all people do.
Not all negative utilitarians deny that there exists such a thing as pleasure; they generally deny that it matters as much as pain. The view that there are no good states is crazy.
What do you make of the point I made here about why denying sequential desirability is implausible (it implies that you shouldn't press a single button C which simply presses both A and B, even though you should press A and then press B), and of the reasoning for why your view commits you to denying the transitivity of the better-than relation? (I also make a third point in the paper.)
I think illusionism is extremely crazy, but even if you adopt it, I don't know why it dissolves the problem more to say "what we think of as consciousness is really just the brain modelling itself" rather than "what we think of as consciousness is really the brain integrating information."
I think there’s a difference between access and phenomenal consciousness. You can have bits of your visual field, for instance, that you’re not introspectively aware of but are part of your consciousness. You also can have access consciousness that you can’t talk about (e.g. if you can’t speak). Not sure why we’d deny that animals have access consciousness.
Yeah it’s very bizarre. Seems just to be vibes.
I don't think this is right. We could imagine a very simple creature that experiences very little pain but is totally focused on it. It's true that, for creatures like us, we normally tend to focus more on more intense pain, but this doesn't mean that's the relevant benchmark for intensity. My claim is that the causal arrow goes the other way.
But if I did, I think this would make me think animal consciousness is even more serious. For simple creatures, pain takes up their whole world.
RP had some arguments against conscious subsystems affecting moral weight very significantly that I found pretty convincing.
Regarding your first point, I don't see why we'd think that degree of attention either correlates with neuron counts or determines the intensity of consciousness.
Interesting! I intended the post largely as a response to someone with views like yours. In short, I think the considerations I provided based on how animals behave are very well explained by the supposition that they're conscious. I also find RP's arguments against neuron counts completely devastating.
Gotcha, makes sense! And I now see how to manipulate the spreadsheet.
I tried to do that but ended up a bit confused about what numbers I was using for stuff (I never really properly learned how spreadsheets worked). If I agree with you about the badness of excruciating pain but think you underrated disabling pain by ~1 order of magnitude, do the results still turn out with shrimp welfare beating other stuff?
Awesome!
I liked your analysis. No worries if this would be too difficult, but it might be helpful to make a website where you can easily switch around the numbers for how the different kinds of suffering compare to each other and see the result.
I agree with most of your estimates but I think you probably underrated how bad disabling pain is. Probably it’s ~500 times worse than normal life. Not sure how that would affect the calculations.
Hmm, I guess none of those happening seems decently likely to me—around 50% probability.