It is not unthinkably improbable that an elephant brain, in which reinforcement from a positive or negative stimulus adjusts millions of times as many neural computations, could be seen as vastly more morally important than a fruit fly's, just as one might think a fruit fly is much more important than a thermostat (which some suggest is conscious and possesses preferences). Since there are differences of millions of times on some major functional aspects of mind, that suggests a mean expected value orders of magnitude higher for the elephant if you put even a bit of weight on the possibility that moral weight scales with the extent of, e.g., the computations that are adjusted by positive and negative stimuli.
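To make the arithmetic behind "a mean expected value orders of magnitude higher" concrete, here is a minimal sketch; the credences and the 10^6 ratio are assumptions chosen purely for illustration, not anyone's actual estimates:

```python
# Illustrative only: a tiny expected-value calculation showing how even a small
# credence in "moral weight scales with reinforcement-adjusted computations"
# can dominate the mean, given an assumed ~10^6x difference between the two animals.

ratio = 1_000_000  # assumed elephant:fruit-fly ratio in reinforcement-adjusted computations

hypotheses = [
    # (credence, elephant's moral weight relative to the fruit fly under that hypothesis)
    (0.99, 1.0),    # moral weight doesn't scale with adjusted computations
    (0.01, ratio),  # moral weight scales linearly with adjusted computations
]

expected_relative_weight = sum(p * w for p, w in hypotheses)
print(expected_relative_weight)  # 10000.99 -- orders of magnitude above 1
```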
This specific kind of account, if it's meant to depend inherently on differences in reinforcement, seems very improbable to me (<0.1%), and conditional on such accounts, the inherent moral importance of reinforcement would also very probably scale very slowly, with faster scaling increasingly improbable. It could still work out that the expected scaling isn't slow, but that would be because of very low-probability possibilities.
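To illustrate that last sentence: conditional on such an account, you can put almost all of your credence on very slow scaling and still get a non-slow conditional expectation, driven by the low-probability tail. The exponents and credences below are assumptions for illustration only:

```python
# Illustrative only: conditional on reinforcement mattering inherently, spread
# credence over how fast moral weight scales with the amount of reinforcement.
# The weight multiplier under scaling exponent k is ratio ** k.

ratio = 1_000_000  # assumed ratio of reinforcement-adjusted computations

scaling_credences = [
    # (conditional credence, scaling exponent k)
    (0.90, 0.0),  # no scaling at all
    (0.09, 0.1),  # very slow (strongly sublinear) scaling
    (0.01, 1.0),  # linear scaling, given low probability
]

expected_multiplier = sum(p * ratio**k for p, k in scaling_credences)
print(expected_multiplier)  # ~10001.3, almost entirely from the 1% linear-scaling case
```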
The value of subjective wellbeing, whether hedonistic, felt desires, reflective evaluation/preferences, choice-based or some kind of combination, seems very probably logically independent of how much reinforcement happens (EDIT: and empirically dissociable from it). My main argument is that reinforcement happens unconsciously and has no necessary or near-immediate conscious effects. We could imagine temporarily or permanently preventing reinforcement without any effect on mental states or subjective wellbeing in the moment. Or we could imagine connecting a brain to an artificial neural network to add more neurons to reinforce, again with no effect.
And even within the same human under normal conditions, holding their reports of value or intensity fixed, the amount of reinforcement that actually happens will probably depend systematically on the nature of the experience, e.g. physical pain vs anxiety vs grief vs joy. If reinforcement has a large effect on expected moral weights, you could, and I'd guess would, end up with an alienating view, on which everyone is systematically wrong about the relative value of their own experiences. You'd effectively need to reweight all of their reports by type of experience.
So, even with intertheoretic comparisons between accounts with and without reinforcement, of which I'd be quite skeptical in this case specifically but also generally, this kind of hypothesis shouldn't make much difference (or, if it does make a substantial difference, the result seems objectionably fanatical and alienating). If we reject such intertheoretic comparisons, as I'm more generally inclined to do and as Open Phil seems to be doing, it should make very little difference.
There are more plausible functions you could use, though, like attention. But, again, I think the cases for intertheoretic comparisons between accounts of how moral value scales with the neurons supporting attention, or probably any other function, are generally very weak, so you should only take expected values over descriptive uncertainty conditional on each moral scaling hypothesis, not across moral scaling hypotheses (unless you normalize by something else, like variance across options). Without intertheoretic comparisons, approaches to moral uncertainty in the literature aren't so sensitive to small probability differences or so fanatical about moral views. So it tends to be more important to focus on large probability shifts than on improbable extreme cases.
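Here is a minimal sketch of that procedural point, with entirely hypothetical options, credences and numbers: expectations are taken over descriptive uncertainty within each moral scaling hypothesis, and if the hypotheses' outputs are combined at all, each is first normalized by something like the spread of the values it assigns across the options, rather than averaged on its raw scale:

```python
import statistics

# Hypothetical numbers, purely to illustrate the procedure, not anyone's actual estimates.

# Options under consideration (illustrative): help 1 elephant vs help 1,000 fruit flies.
options = ["help_1_elephant", "help_1000_flies"]

# Descriptive (empirical) uncertainty: possible elephant:fly ratios of
# reinforcement-adjusted computations, with probabilities.
empirical_scenarios = [(0.5, 1e5), (0.5, 1e6)]

hypotheses = ["no_scaling", "linear_scaling"]

def value(hypothesis, option, ratio):
    """Value of an option under a moral scaling hypothesis, in that hypothesis's own units."""
    per_fly = 1.0
    per_elephant = ratio if hypothesis == "linear_scaling" else 1.0
    return per_elephant if option == "help_1_elephant" else 1000 * per_fly

# Step 1: expected values over descriptive uncertainty, conditional on each hypothesis.
conditional_ev = {
    h: {o: sum(p * value(h, o, r) for p, r in empirical_scenarios) for o in options}
    for h in hypotheses
}
print(conditional_ev)
# {'no_scaling': {'help_1_elephant': 1.0, 'help_1000_flies': 1000.0},
#  'linear_scaling': {'help_1_elephant': 550000.0, 'help_1000_flies': 1000.0}}

# Step 2: don't average these across hypotheses on their raw scales; if combining at all,
# normalize each hypothesis first, e.g. by the spread of its values across the options.
normalized = {
    h: {o: v / statistics.pstdev(conditional_ev[h].values())
        for o, v in conditional_ev[h].items()}
    for h in hypotheses
}
print(normalized)
```

Without that normalization step, a low-credence linear-scaling hypothesis swamps the comparison simply because its raw numbers are bigger, which is the fanaticism worry above.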