My guess is that he meant the sequences convey the kind of foundational epistemology that helps people derive better models on subjects like AI Alignment by themselves, though all of the sequences in The Machine in the Ghost and Mere Goodness have direct object-level relevance.
Excepting Ngo’s AGI safety from first principles, I don’t especially like most of those resources as introductions, precisely because they offer readers very little opportunity to test or build on their beliefs. I also think most of them are substantially wrong. (Concrete Problems in AI Safety seems fine, but it skips a lot of steps. I haven’t read Unsolved Problems in ML Safety.)