I am a philosophy researcher at the Centre for Animal Ethics at Pompeu Fabra University, with research interests in well-being, AI welfare, global priorities, animal ethics, the alignment problem and the long-term future. I am also a philosophy undergraduate at the University of Barcelona. To see my publications, go to https://philpeople.org/profiles/adria-r-moret
Adrià Moret
Karma: 183
Digital Consciousness Model Results and Key Takeaways
AI Alignment: The Case for Including Animals
AI Welfare Risks
Perfect!
It’s more or less similar. I do not focus that much on the moral dubiousness of “happy servants”. Instead, I try to show that standard alignment methods, or preventing near-future AIs with moral patienthood from taking actions they are trying to take, cause net harm to the AIs according to desire satisfactionism, hedonism and objective list theories.
I think this would be a really good idea. As you hint, it could allow people to better calibrate on the importance/significance of the evidence, which is weaker than in the case of human adults. I also expect a model that gives more plausible results for newborns/infants to give more plausible results for chickens as well.