I was mostly skeptical because the people involved did not seem to have any experience doing AI Alignment research, nor did they themselves have the technical background they were trying to teach. I think this caused them to focus on the obvious things to teach, instead of the things that are actually useful.
To be clear, I have broadly positive impressions of Toon and think the project had promise; my concern is just that the team didn't have the skills to execute on it, which I think few people have.