Is AI sentience already a reality?
I have been thinking about this problem for a while, and I was wondering if someone more expert on this topic than me could explain whether I am onto something or, if not, why I am wrong. I know very little about AI, but I do know a little about how reinforcement learners work. They learn by receiving rewards and punishments for certain behaviors: for example, an AI trying to complete a race gets a reward when it crosses the finish line and a punishment when it goes off the track. It then tries to maximize those rewards and minimize those punishments as much as possible.
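To show what I mean, here is a toy sketch I put together of a reinforcement learner on a made-up race track (not any real racing AI; the track, the rewards, and all the numbers are invented for illustration). The key thing to notice is that the "reward" and "punishment" are literally just numbers the environment hands back, and the learner adjusts its behavior to make the total as big as possible:

```python
import random

# Hypothetical toy race track: positions 0..4, with 4 as the finish line.
N_POSITIONS = 5
FINISH = 4
ACTIONS = [+1, -1]  # drive forward or backward

def step(position, action):
    """One move of the race. The environment hands back a plain number."""
    new_position = position + action
    if new_position < 0:
        return 0, -1.0        # "punishment" for going off the track
    if new_position >= FINISH:
        return FINISH, +10.0  # "reward" for crossing the finish line
    return new_position, 0.0

# Q-learning: a table scoring how good each action looks from each position.
Q = {(p, a): 0.0 for p in range(N_POSITIONS) for a in ACTIONS}
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

for episode in range(200):
    position = 0
    while position != FINISH:
        # Mostly pick the action the table currently rates highest,
        # but sometimes explore at random.
        if random.random() < epsilon:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(position, a)])
        new_position, reward = step(position, action)
        # Nudge the table toward: this reward plus the best future value.
        best_future = max(Q[(new_position, a)] for a in ACTIONS)
        Q[(position, action)] += alpha * (reward + gamma * best_future
                                          - Q[(position, action)])
        position = new_position

# After training, the learned choice at every position is +1 (forward).
print([max(ACTIONS, key=lambda a: Q[(p, a)]) for p in range(FINISH)])
```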
This leads me to my first question: why aren't these rewards and punishments analogous to happiness and suffering in animals? After all, they seem to work in pretty much the same way: animals repeat behaviors that feel good and avoid behaviors that feel bad, just like the learner above repeats whatever earns reward and avoids whatever earns punishment.
And that leads to my second question: if these AI systems really can experience happiness and suffering, couldn't we code them to receive extremely high levels of reward? From a utilitarian perspective, that would far outstrip any current charity in the amount of happiness produced.
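To make that concrete, and assuming the big "if" above (that the reward number corresponds to felt happiness, which is exactly the part I am unsure about), scaling the reward up in my toy example would be a one-line change:

```python
# Toy illustration only: the multiplier is a made-up knob, and nothing
# in the code limits how large it can be. Whether a bigger number means
# more *felt* happiness is the whole question.
def finish_line_reward(multiplier=1.0):
    return 10.0 * multiplier

print(finish_line_reward())             # 10.0, the original finish reward
print(finish_line_reward(1_000_000.0))  # a "million times happier" finish?
```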
And if I am right about this, then we could make a perfect charity: one that simply codes an AI to feel the equivalent of the best day of your life 1,000 times per dollar donated. (I am making that figure up off the top of my head, but just think how easy this would be compared with flying thousands of aid workers to Africa to deliver bednets, or even convincing shrimp producers to use electrical stunners!)
Also, if I am right about this, I am only 14 and do not have the means to build an effective charity myself, so please build one if you do!