I'm curating this post. The issue of moral weights turned out to be a major crux in the AW vs GH debate, and I'm excited about more progress being made now that some live disagreements have been surfaced. I'm curating both this post and @titotal's (link) as the "best in class" from the debate week on this topic.
On this post: The post itself does a good job of laying out some reasonable-to-hold objections to RP's moral weights. In particular, I think the point about discounting behavioural proxies is important and likely to come up again in future.
I think the comment section is also very interesting; there are quite a few good threads:
1. @David Mathers🔸's comment, which raises the point that the idea of estimating intensity/size of experience from neuron counts doesn't come up (much) in the academic literature. This was surprising to me!
2. @Bob Fischer's counterpoint making the RP case.
3. This thread, which gets into the issue of what counts as an uninformed prior wrt moral weights.
I actually feel mildly guilty for my comment. It's not like I've done a proper search, it's not something I worked on directly, and I dislike neuron count weighting from a more inside-view perspective, so it's possible my memory is biased here. Not to mention that I don't actually know of any philosophers (beyond Bob and other people at RP themselves) who explicitly deny neuron count weighting. Don't update TOO much on me here!
Separate from your comment, I have seen comments like this elsewhere (albeit also mainly from Bob and other RP people), so I still think it's interesting additional evidence that this is a thing.
It seems like some people find it borderline inconceivable that higher neuron counts correspond to higher experience intensity/size/moral weight, and some people find it inconceivable the other way. This is pretty interesting!