So, I’ll give two more examples of how burden of proof is typically used:
You claim that you just saw a unicorn ride past. I say that the burden of proof is on you to prove it, as unicorns do not exist (as far as we know).
As prime minister, you try to combat obesity by taxing people in proportion to their weight. I say that the burden of proof is on you to prove that such a policy would do more good than harm.
I think in both these cases, the statements made are quite reasonable. Let me try to translate the objections into your language:
My prior of your seeing a unicorn is extremely low, because unicorns do not exist (as far as we know).
My prior of this policy being a good idea is low, because most potential interventions are not helpful.
These are fine, but I’m not sure I prefer either of them. It seems like the other party can just say “well, my priors are high, so I guess both our beliefs are equally valid”.
I think “burden of proof” translates to “you should provide a lot of proof for your position in order for me or anyone else to believe you”. It’s a statement of what people’s priors should be.
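The reading above can be made concrete with Bayes’ rule in odds form. The sketch below is my own illustration (the specific numbers are made-up assumptions, not from this discussion): with a very low prior, even evidence worth a large likelihood ratio leaves the posterior low, which is exactly what “the burden of proof is on you” cashes out to.

```python
def posterior(prior: float, bayes_factor: float) -> float:
    """Posterior probability after seeing evidence with the given Bayes factor,
    i.e. P(evidence | claim true) / P(evidence | claim false)."""
    odds = prior / (1.0 - prior)          # prior probability -> prior odds
    post_odds = odds * bayes_factor       # Bayes' rule in odds form
    return post_odds / (1.0 + post_odds)  # posterior odds -> probability

# "I saw a unicorn": assume a one-in-a-million prior, and that eyewitness
# testimony is worth a 100x update (both numbers are illustrative).
print(posterior(1e-6, 100))   # still only ~0.0001: testimony alone won't do

# The same 100x testimony about a mundane claim (prior 0.5) is near-conclusive.
print(posterior(0.5, 100))    # ~0.99
```

The asymmetry between the two calls is the whole point: the evidence required scales with how far the prior has to move, so demanding “a lot of proof” is just demanding a large enough Bayes factor.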
Why doesn’t this translate to AI risk?
“We should avoid building more powerful AI because it might kill us all” breaks down to:
No prior AI system has tried to kill us all.
We are not sure how powerful a system we can really make by scaling known techniques (and techniques adjacent to them) in the next 10-20 years. A system 20 years from now might not actually be “AGI”; we don’t know.
This sounds like someone should have the burden of proof of showing that near-future AI systems are (1) lethal and (2) powerful in a practically useful way: not just a trick, but actually effective at real-world tasks.
And, just as with the absence of unicorns caught on film, someone could argue that (1) and (2) are unlikely on priors, given past AI hype that did not pan out.
The counterargument seems to be “we should pause now; I don’t have to prove anything, because an AI system might be so smart it can defeat any obstacle, even though I don’t know how it could do that: it will be so smart it finds a way”. Or “by the time there is proof, we will be about to die”.