Happy to see progress on these.
One worry I have about them is that they (at least the forecasting part of the economics one, and the psychology one) seem very focused on various adjustments to human judgement. In contrast, I think a much more urgent and tractable question is how to improve the judgement and epistemics of AI systems.
I’ve written a bit more here.
AI epistemics seems like an important area to me both because it helps with AI safety, and because I expect that it’s likely to be the main epistemic enhancement we’ll get in the next 20 years or so.
Thanks, Ozzie! This is interesting. There could well be something there. Could you say more about what you have in mind?
As a very simple example, Amanda Askell stands out to me as someone who used to work in philosophy, then shifted to ML, where she now seems to be doing important work crafting the personality of Claude. I'd guess Claude easily has 100k+ direct users (more through the API), and I expect that to expand a lot.
There have been some investigations into getting LLMs to be truthful:
https://arxiv.org/abs/2110.06674
And of course, LLMs have shown promise at forecasting:
https://arxiv.org/abs/2402.18563
In general, I'm both suspicious of human intellectuals (for reasons outlined in the post linked above), and suspicious of our ability to improve human intellectuals. On the latter: it's just very expensive to train humans to adopt new practices or methods, and obviously insanely expensive to train them in any complex topic like Bayesian statistics.
Meanwhile, LLM setups are rapidly improving, and are arguably much more straightforward to improve. There's of course the challenge of actually getting the right LLM companies to incorporate recommended practices, but my guess is that this is often much easier than retraining humans. You could also just build epistemic tools on top of LLMs, though these would generally reach fewer people.
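To make "epistemic tools on top of LLMs" a bit more concrete, here's a minimal sketch of one: elicit explicit probabilities from a model and score them against resolved outcomes. The `query_llm` function is a hypothetical stand-in for whatever completion API you'd actually use, not a real library call.

```python
import re

def query_llm(prompt: str) -> str:
    # Hypothetical placeholder: a real tool would call an LLM API here.
    return "Reasoning: base rates suggest this is unlikely. Probability: 0.15"

def elicit_probability(question: str) -> float:
    """Ask the model for a forecast and parse out a probability in [0, 1]."""
    prompt = (
        f"Question: {question}\n"
        "Give brief reasoning, then end with 'Probability: <number between 0 and 1>'."
    )
    reply = query_llm(prompt)
    match = re.search(r"Probability:\s*([01](?:\.\d+)?)", reply)
    if match is None:
        raise ValueError("Model did not return a parseable probability")
    return float(match.group(1))

def brier_score(forecasts: list[float], outcomes: list[int]) -> float:
    """Mean squared error between forecasts and resolved 0/1 outcomes (lower is better)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)
```

The point of a wrapper like this is that the improvement loop lives in cheap, inspectable code (prompts, parsing, scoring) rather than in retraining people.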
I have a lot of uncertainty about whether AI is likely to be an existential risk. But I'm much more certain that AI is improving quickly and will become a more critical epistemic tool than it is now. It's also just far easier to study than humans are.
Happy to discuss / chat if that could ever be useful!