Should we lobby governments to impose a moratorium on AI research? Since we don’t enforce pauses on most new technologies, I hope the reader will grant that the burden of proof is on those who advocate for such a moratorium.
I agree that the question of “what priors to use here” is super important.
For example, if someone chose the prior “we usually don’t bring new, more intelligent life forms to live with us, so the burden of proof is on doing so,” would that be valid?
Or if someone said “we usually don’t enforce pauses on writing new computer programs,” would THAT be valid?
imo: the question of “what priors to use” is important and not trivial. I agree with @Holly_Elmore that just assuming the priors here skips over some important stuff. But I disagree that “you could have stopped here,” since there might be things I could use to update my own (different) prior.
In a debate, which is what was supposed to be happening, the point is to make claims that either support or refute the central claim. That’s what Holly was pointing out—this is a fundamental requirement for accepting Nora’s position. (I don’t think that this is the only crux—“AI Safety is gonna be easy” and “AI is fully understandable” are two far larger cruxes, but they largely depend on this first one.)
You could have stopped here. This is our crux.
*As far as my essay (not posted yet) was concerned, she could have stopped there, because this is our crux.