I think that chapter in The Precipice is really good, but it’s not exactly the sort of thing I have in mind.
Although Toby’s less optimistic than I am, he’s still only arguing for a 10% probability of existentially bad outcomes from misalignment.* The argument in the chapter is also, by necessity, relatively cursory. It’s aiming to introduce the field of artificial intelligence and the concept of AGI to readers who might be unfamiliar with them, explain what misalignment risk is, make the idea vivid to readers, clarify misconceptions, describe the state of expert opinion, and add various other nuances, all within the span of about fifteen pages. I think that it succeeds very well in what it’s aiming to do, but I would say that it’s aiming for something fairly different.
*Technically, if I remember correctly, it’s a 10% probability within the next century. So the implied overall probability is at least somewhat higher.
I see, thanks for the explanation!