What concerns are there that you think the Mechanize founders haven’t considered? I haven’t engaged with their work that much, but it seems like they have been part of the AI safety debate for years now, with plenty of discussion on this Forum and elsewhere (e.g. I can’t think of many AIS people who have been as active on this Forum as @Matthew_Barnett has been over the last few years). I feel like they have communicated their models and disagreements a (more than) fair amount already, so I don’t know what you would expect to change in further discussions.
You make a fair point, but what tool do we have other than our voice? I’ve read Matthew’s latest post and skimmed through others. I see some concerning views, but I can also understand how he arrives at them. What often puzzles me about some AI folks, though, is the level of confidence needed to take such high-stakes actions. Why not err on the side of caution when the stakes are potentially so high?
Perhaps instead of trying to change someone’s moral views, we could just encourage taking moral uncertainty seriously? I personally lean towards hedonic act utilitarianism, yet I often default to ‘common sense morality’ because I’m just not certain enough.
I don’t have strong feelings on how best to tackle this, and I won’t have good answers to every question. I’m just voicing concern and hoping others with more expertise might consider engaging constructively.