Meh, that makes it sound too narrowly technical—there are a lot of ways that advanced AI can cause problems, and they don’t all fit into the narrow paradigm of a system running into bugs/accidents that can be fixed with better programming.
This seems unnecessarily rude to me, and doesn’t engage with the post. For example, I don’t see the post anywhere characterising accidents as only coming from bugs in code, and it seems like this dismissal of the phrase ‘AI accidents’ would apply equally to ‘AI risk’.
For example, I don’t see the post anywhere characterising accidents as only coming from bugs in code, and it seems like this dismissal of the phrase ‘AI accidents’ would apply equally to ‘AI risk’.
But I didn’t say that the author is characterizing accidents as coming from bugs in code. I said that the language he is proposing has that effect. The author didn’t address this potential problem, so there was nothing for me to engage with.
it seems like this dismissal of the phrase ‘AI accidents’ would apply equally to ‘AI risk’.
It does in fact apply, since AI risk neglects important topics in AI ethics, but it doesn’t apply as strongly as it would for “AI accidents.”
Hi Kyle, I think that it’s worth us all putting effort into being friendly and polite on this forum, especially when we disagree with one another. I didn’t find your first comment informative or polite, and just commented to explain why I down-voted it.
Thanks Ben, for telling us that communities of do-gooders should be considerate. But I wasn’t inconsiderate. If you linked an article titled “why communities of do-gooders should be so insanely fragile that they can’t handle a small bit of criticism” then it would be relevant.
I didn’t find your first comment informative or polite, and just commented to explain why I down-voted it.
Yeah, and now I’m commenting to explain why I downvoted yours, and how you are failing to communicate a convincing point. If you found my first comment “rude” or impolite then you’ve lost your grip on ordinary conversation. Saying “meh” is not rude, yikes.
I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don’t care implement them.
I admit I would benefit from some clarification on your point—are you arguing that the article assumes a bug-free AI won’t cause AI accidents? Is it the case that this arose from Amodei et al.’s definition?: “unintended and harmful behavior that may emerge from poor design of real-world AI systems”. Poor design of real world AI systems isn’t limited to being bug-free, but I can see why this might have caused confusion.
are you arguing that the article assumes a bug-free AI won’t cause AI accidents?
I’m not—I’m saying that when you phrase it as accidents then it creates flawed perceptions about the nature and scope of the problem. An accident sounds like a onetime event that a system causes in the course of its performance; AI risk is about systems whose performance itself is fundamentally destructive. Accidents are aberrations from normal system behavior; the core idea of AI risk is that any known specification of system behavior, when followed comprehensively by advanced AI, is not going to work.
Meh, that makes it sound too narrowly technical—there are a lot of ways that advanced AI can cause problems, and they don’t all fit into the narrow paradigm of a system running into bugs/accidents that can be fixed with better programming.
This seems unnecessarily rude to me, and doesn’t engage with the post. For example, I don’t see the post anywhere characterising accidents as only coming from bugs in code, and it seems like this dismissal of the phrase ‘AI accidents’ would apply equally to ‘AI risk’.
“Rude?” Oh please, grow some thick skin.
But I didn’t say that the author is characterizing accidents as coming from bugs in code. I said that the language he is proposing has that effect. The author didn’t address this potential problem, so there was nothing for me to engage with.
It does in fact apply, since AI risk neglects important topics in AI ethics, but it doesn’t apply as strongly as it would for “AI accidents.”
Hi Kyle, I think that it’s worth us all putting effort into being friendly and polite on this forum, especially when we disagree with one another. I didn’t find your first comment informative or polite, and just commented to explain why I down-voted it.
https://www.centreforeffectivealtruism.org/blog/considering-considerateness-why-communities-of-do-gooders-should-be/
Thanks Ben, for telling us that communities of do-gooders should be considerate. But I wasn’t inconsiderate. If you linked an article titled “why communities of do-gooders should be so insanely fragile that they can’t handle a small bit of criticism” then it would be relevant.
Yeah, and now I’m commenting to explain why I downvoted yours, and how you are failing to communicate a convincing point. If you found my first comment “rude” or impolite then you’ve lost your grip on ordinary conversation. Saying “meh” is not rude, yikes.
OpenPhil notion of ‘accident risk’ more general than yours to describe the scenarios that aren’t misuse risk and their term makes perfect sense to me: https://www.openphilanthropy.org/blog/potential-risks-advanced-artificial-intelligence-philanthropic-opportunity
Yeah, well I don’t think we should only be talking about accident risk.
What do you have in mind? If it can’t be fixed with better programming, how will they be fixed?
Better decision theory, which is much of what MIRI does, and better guiding philosophy.
I agree that more of both is needed. Both need to be instantiated in actual code, though. And both are useless if researchers don’t care implement them.
I admit I would benefit from some clarification on your point—are you arguing that the article assumes a bug-free AI won’t cause AI accidents? Is it the case that this arose from Amodei et al.’s definition?: “unintended and harmful behavior that may emerge from poor design of real-world AI systems”. Poor design of real world AI systems isn’t limited to being bug-free, but I can see why this might have caused confusion.
I’m not—I’m saying that when you phrase it as accidents then it creates flawed perceptions about the nature and scope of the problem. An accident sounds like a onetime event that a system causes in the course of its performance; AI risk is about systems whose performance itself is fundamentally destructive. Accidents are aberrations from normal system behavior; the core idea of AI risk is that any known specification of system behavior, when followed comprehensively by advanced AI, is not going to work.