“The retreat lasted from Friday evening to Sunday afternoon and had 12 participants from UCLA, Harvard, UCI, and UC Berkeley. There was a 1:3 ratio of grad students to undergrads”
So it was 9 undergrads and 3 grads interested in AI safety? This sounds like a biased sample. Not one postdoc, industry researcher, or PI?
To properly evaluate timelines, I think you should include some older, more experienced folks rather than selecting only AI safety enthusiasts, which biases your sample towards people with shorter timelines.
How many participants have actually developed AI systems for a real-world application? How many have developed an AI system for a non-trivial application? In my experience, many people working in AI safety have very little experience with real-world AI development, and many I have seen have none whatsoever. That isn't good when it comes to gauging timelines, I think. When you get into the weeds and learn "how the sausage is made" in building AI systems (i.e., gain true object-level understanding), I think it makes you more pessimistic on timelines for valid reasons. For one thing, you are exposed to weird, unexplainable failure modes which are never published or publicized.