What do you think the risk is of “AI accidents” simply inheriting the baggage that “AI risk” now carries, via the euphemism treadmill?
I don’t think it’s an implausible risk, but I also don’t think it should stand in the way of pursuing a better framing.