I guess the overall point for me is that if the goal is just to speculate about what much more capable and accurate LLMs might enable, then what’s the point of doing a small, uncontrolled, empirical study demonstrating that current LLMs are not, in fact, that kind of risk?