What do you think of taking the log of the neuron count, dividing that by neural complexity, and adding the individual's total wellbeing impact, to get their relative moral value? Intuitively, this can make sense:
1) The more neurons, the more the individual can feel (but the intensity of perception can increase more slowly than the number of neurons, hence the log).
2) The higher the neural complexity, the less intensely the individual perceives. Complexity can correlate with one's ability to cope better with exteroceptive stimuli, because one has more experience, rational or emotional/intuitive, inherited from ancestors or acquired over one's own life, to 'deal with' them.[1]
3) The individual's impact on net wellbeing[2] should be added. This is the weighting I am suggesting.
For humans, especially privileged ones, 3) could make the contribution of the individual's own wellbeing negligible in the total, because they can have much more influence on others than on themselves. For individuals with fewer choices, including confined non-human animals,[3] the contribution of 3) can, on the contrary, be neglected, because these animals barely influence others.
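To make the arithmetic concrete, here is a minimal sketch in Python. Only the functional form comes from the proposal above; everything else is a labelled assumption: the neuron counts are rough published orders of magnitude (~8.6e10 for a human, ~2.2e8 for a chicken), while the complexity scores and wellbeing-impact figures are invented placeholders, chosen only to show how the impact term can dominate for a high-influence human and vanish for a confined animal.

```python
import math

def relative_moral_value(neuron_count, neural_complexity, wellbeing_impact):
    """Proposed weighting: log(neuron count) / neural complexity,
    plus the individual's total impact on net wellbeing.
    Units and scales here are hypothetical; only the functional
    form comes from the proposal above."""
    return math.log(neuron_count) / neural_complexity + wellbeing_impact

# Invented inputs (illustrative, not empirical estimates):
human = relative_moral_value(8.6e10, neural_complexity=5.0, wellbeing_impact=100.0)
chicken = relative_moral_value(2.2e8, neural_complexity=2.0, wellbeing_impact=0.0)

print(f"human:   {human:.1f}")   # ~105.0: total dominated by the impact term 3)
print(f"chicken: {chicken:.1f}") # ~9.6: impact negligible; perception term dominates
```

With these made-up numbers, the human total comes almost entirely from 3), while the chicken's comes entirely from the perception term, which is the asymmetry described above.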
How does this compare with what you found (and which estimate is more accurate)?
[1] This can correlate with 3), but because the terms enter as a sum, there should be no double-counting.
[2] The devil can be in determining which counterfactual to use. For humans, this can be impact due to i) action, ii) inaction (to whatever extent that should be understood in a utilitarian way), iii) unfulfilled potential, e.g. someone could have studied to influence decisionmakers but did not, or iv) unfulfilled capacity, e.g. someone who studied and can influence decisionmakers chooses another job (see the numeric sketch after these footnotes). For animals, this can be similar, except that animals' free will is intuitively understood as lower. For example, if a chicken in a crammed barn chose to try killing others instead of upskilling them to teach young chickens to prevent disease, that can be attributed to the norms and environment set by the human caretakers rather than to the choice of the chicken.
[3] Assuming that they cannot influence the wellbeing of others, e.g. by presenting a positive attitude.
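Footnote [2]'s point about counterfactuals can be made numeric. Below is a minimal sketch, with entirely invented welfare totals, showing how the same individual's computed impact changes sign depending on which baseline from i)-iv) is subtracted:

```python
# Hypothetical total-welfare figures, invented purely for illustration.
actual = 50.0  # welfare of the world given what the individual actually did

# Counterfactual baselines, loosely matching ii)-iv) in footnote [2]:
baselines = {
    "inaction":              40.0,  # ii) the individual does nothing
    "unfulfilled potential": 70.0,  # iii) they had studied to influence decisionmakers
    "unfulfilled capacity":  65.0,  # iv) they used the influence they trained for
}

# Impact under each counterfactual: actual world minus baseline world.
for name, baseline in baselines.items():
    print(f"vs. {name}: {actual - baseline:+.1f}")
# vs. inaction:              +10.0
# vs. unfulfilled potential: -20.0
# vs. unfulfilled capacity:  -15.0
```

The sign flips depending on the baseline, so the weighting in 3) is only as well-defined as the choice of counterfactual.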
This sounds great to me, but I'm not the author; I just run the journal. We'd love to have you share your review of the article: "To register, please email info@theseedsofscience.org with your name, title (can be anything/optional), institution (same as title), and link (personal website, twitter, or linkedin is fine) for your listing on the gardeners page. From there, it's pretty self-explanatory; I will add you to the mailing list and send you an email that includes the manuscript, our publication criteria, and a simple review form for recording votes/comments."
Thank you. I encourage you to:
1) Encourage authors of EA-related articles to make their work publicly accessible.
2) Post summaries of relevant articles on the EA Forum, to facilitate discussion without the need to register and to further ease the work of gardeners.