I think that there’s a good chance that a leading, careful AI project could be a huge force for good, substantially reducing existential risk
I think the burden of proof should be on the big AI companies to show that this is actually a possibility. Because right now, the technology, based on the current paradigm, looks like it’s fundamentally uncontrollable.
TL;DR: I don’t like talking about “burden of proof”
I prefer talking about “priors”.
Seems like you (@Greg_Colbourn) have priors that AI labs will cause damage, and I’d assume @Benjamin Hilton would agree with that?
I also guess you both have priors that ~random (average) capabilities research will be net negative?
If so, I suggest we should ask if the AI lab (or the specific capabilities research) has overcome that prior somehow.
wdyt?
I don’t think any of the big AI labs have overcome that prior, but I also have the prior that their safety plans don’t even make sense theoretically—hence the “burden of proof” is on them to show that it is possible to align the kind of AI they are building. Another thing pointing in the opposite direction.
Whoever downvoted this, I’d really prefer it if you told me why.
You can do it anonymously:
https://docs.google.com/forms/d/e/1FAIpQLSca6NOTbFMU9BBQBYHecUfjPsxhGbzzlFO5BNNR1AIXZjpvcw/viewform