You make a fair point, but what other tool do we have than our voice? I’ve read Matthew’s last post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. But what puzzles me often with some AI folks is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?
Perhaps instead of trying to change someone’s moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to ‘common sense morality’ because I’m just not certain enough.
I don’t have strong views on how best to tackle this, and I won’t have good answers to every question. I’m just voicing a concern and hoping others with more expertise might consider engaging constructively.