The point I wanted to make in the short form was directed at a particular brand of skeptic.
When I said,
Something has gone wrong if people think pausing can only make sense if the risks of AI ruin are >50%.
I didn’t mean to imply that anyone who opposes pausing would consider >50% ruin levels their crux.
Likewise, I didn’t mean to imply that “let’s grant 5% risk levels” is something that every skeptic would go along with (but good that your comment is making this explicit!).
For what it’s worth, if I had to give the range of credences over which I think the people I currently respect most, epistemically, can reasonably disagree on this question today (June 2024), I would probably not include credences <<5% (I’d maybe put the range at more like 15–90%?). (This is of course subject to change if I encounter surprisingly good arguments for something outside it.) But that’s a separate(!) discussion, separate from the conditional statement that I wanted to argue for in my short form. (Obviously, other people will draw the line elsewhere.)
On the 80k article, I think it aged less well than what one could maybe have written at the time, but it was written when AI risk concerns still seemed fringe. So the fact that, in my view, it didn’t age amazingly doesn’t mean it was unreasonable at the time. Back then, I’d probably have called it “lower than what I would give, but within the range of what I consider reasonable.”