It’s awkward to interpret mathematical judgements about a value that is described first as an unknown, then as a supposition about one’s internal process of assigning an arbitrary value to that unknown, and finally as a possible range spanning several orders of magnitude. That is how I read the report on consciousness (and the speculation about moral weights).
I would like to learn more about how EA folks typically weigh evidence for the presence of different kinds of consciousness, or for the moral weight of different species. In particular, what evidence helps you decide the presence of different aspects of consciousness in specific amounts? What evidence helps you decide the moral weight of a person of one species relative to another?
Finally, what is EA speculation about more traditional models of morality that rely on a moral identity, judgements of right and wrong, and, in particular, the symbolic importance of actions, even when they have (potentially) minimal verifiable consequences for others (for example, catching a fly and releasing it outside)?
Thanks for commenting!
In particular, what evidence helps you decide the presence of different aspects of consciousness in specific amounts?
What evidence helps you decide the moral weight of a person of one species relative to another?
In Luke’s post, I think the quantiles for moral weight were defined based on three factors: clock speed of consciousness, unity of consciousness, and unity-independent intensity of valenced aspects of consciousness.
Finally, what is EA speculation about more traditional models of morality that rely on a moral identity, judgements of right and wrong, and, in particular, the symbolic importance of actions, even when they have (potentially) minimal verifiable consequences for others (for example, catching a fly and releasing it outside)?
Personally, I put more than 90% of my credence on total hedonic utilitarianism (classical utilitarianism). However, in practice, the full consequences of my actions are very hard to measure, so I very often (always?) rely on heuristics to decide what to do (especially when there are “minimal verifiable [or measurable] consequences”).
Note that cost-effectiveness analyses or other quantitative methods are still heuristics, not definitive answers, because they are always incomplete.
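As a side note on why unknowns with ranges spanning several orders of magnitude are awkward to reason about quantitatively, here is a minimal sketch. The distribution and its bounds are my own hypothetical choices for illustration, not figures from Luke’s post or any EA report: it just shows that when a subjective moral-weight estimate is spread log-uniformly over a wide range, the expected value is dominated by the upper tail and sits far above the median.

```python
import random

random.seed(0)

# Hypothetical: moral weight relative to a human, log-uniform over
# five orders of magnitude (0.0001 to 10). These bounds are invented
# for illustration only.
samples = [10 ** random.uniform(-4, 1) for _ in range(100_000)]

mean = sum(samples) / len(samples)
median = sorted(samples)[len(samples) // 2]

# The mean sits well above the median: the expected value is driven
# almost entirely by the top of the range, so point estimates and
# expectations can diverge wildly for such wide unknowns.
print(f"median ~ {median:.4f}, mean ~ {mean:.4f}")
```

Under these made-up assumptions the mean comes out more than an order of magnitude above the median, which is one way to see why a single “value” for such an unknown is hard to pin down.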
I took the clock speed, unity, and intensity factors to be the aspects of consciousness about which one gathered evidence.
Total hedonic utilitarianism is mathematically interesting. I should explore its logical implications.
I appreciate what you describe as heuristics. In my everyday life I apply heuristics.
Morality is informed either by heuristics that estimate the consequences of actions or by heuristics that interpret the symbolic content of actions (their subjective or intersubjective meaning).
EDIT: morality is also informed by heuristics that assess the intentions behind actions, irrespective of their consequences, but that was not my interest here.
I wonder what heuristics the EA community officially acknowledges as relevant to understanding the level of consciousness or the moral weight of beings of other species.