I don’t think any of the big AI labs have overcome that prior, and I hold the additional prior that their safety plans don’t even make sense theoretically; hence the “burden of proof” is on them to show that it is possible to align the kind of AI they are building. That’s another thing pointing in the opposite direction.