There’s an adjacent take I agree with, which is more like:
1. AI will likely create many high-stakes decisions and a confusing environment
2. The situation would be better if we could use AI to keep our ability to figure stuff out in step with AI progress
3. Rather than waiting until the world is very confusing, maybe we should use AIs right now to do some kinds of intellectual writing, in ways we expect to improve as AIs improve (even if AI development isn’t optimising for intellectual writing).
I think this could look a bit like a company with mostly AI workers that produces writing on a bunch of topics, or, as a first step, a heavily LM-written (but still high-quality) Substack.