Please correct me if I’m misunderstanding you, but this idea seems to follow from a chain of logic that goes like this:
1. We need more widely-read, high-quality writing on AI risk.
2. Therefore, we need a large quantity of writing on AI risk.
3. We can use LLMs to help produce this large quantity.
I disagree with #2. It’s sufficient to make a smaller amount of really good content and distribute it widely. I think right now the bottleneck isn’t a lack of content for public consumption, it’s a lack of high-quality content.
And I appreciate some of the efforts to fix this; for example, Existential Risk Observatory has written some articles in national magazines, MIRI is developing some new public materials, and there’s a documentary in the works. I think those are the sorts of things we need. I don’t think AI is good enough to produce content at the level of quality that I expect/hope those groups will achieve.
(Take this comment as a weak endorsement of those three things but not a strong endorsement. I think they’re doing the right kinds of things; I’m not strongly confident that the results will be high quality, but I hope they will be.)
That said, I do agree with you that LLMs can speed up writing, and that the writing can be high-quality as long as there’s enough human oversight. (TBH I am not sure how to do this myself; I’ve tried, but I always end up writing ~everything by hand. But many people have had success with LLM-assisted writing.)
There’s an adjacent take I agree with, which goes more like:
1. AI will likely create many high-stakes decisions and a confusing environment.
2. The situation would be better if we could use AI to keep our ability to figure stuff out in step with AI progress.
3. Rather than waiting until the world is very confusing, maybe we should use AIs right now to do some kinds of intellectual writing, in ways we expect to improve as AIs improve (even if AI development isn’t optimising for intellectual writing).
I think this could look a bit like a company with mostly AI workers that produces writing on a bunch of topics, or, as a first step, a heavily LLM-written (but still high-quality) Substack.
If you want to reach a very wide audience the N times they need to read, think about, and internalize the message, you can either write N pieces that each reach that whole audience, or N×y pieces that each reach a portion (roughly 1/y) of that audience. Generally, if you have the ability to efficiently write N×y pieces, the latter is going to be easier than the former. This is what I mean about comms being a numbers game, and I take this to be pretty foundational to a lot of comms work in marketing, political campaigning, and beyond.
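A toy sketch of that arithmetic, with made-up numbers (the audience size, N, and y below are all hypothetical, and it assumes the narrower pieces spread their exposures roughly evenly across the audience):

```python
# Toy model of the reach arithmetic above (all numbers are made up).
# Each person needs to encounter the message N times to internalize it.

audience = 1_000_000   # hypothetical audience size
N = 10                 # exposures needed per person
y = 20                 # narrow pieces traded for each broad piece

# Strategy A: N blockbuster pieces, each reaching the whole audience.
impressions_a = N * audience

# Strategy B: N*y smaller pieces, each reaching ~1/y of the audience.
# If coverage is spread roughly evenly, expected exposures per person
# come out the same as in Strategy A.
impressions_b = (N * y) * (audience // y)

print(impressions_a)  # 10000000
print(impressions_b)  # 10000000
```

Total expected exposure is identical either way; the claim is just that producing many narrow pieces is usually cheaper per impression than producing a few pieces that everyone sees.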
Though I also agree with Caleb’s adjacent take, largely because if you can build an AI company then you can create greater coverage for your ideas, arguments, or data, pursuant to the above.
Of course, there’s large and there’s large. We may well disagree about how good LLMs are at writing. I think Claude is around the 90th percentile compared to tech journalists in terms of factual accuracy, clarity, and style.
You could instead, or in addition, do a bunch of paid advertising to get writing in front of everyone. I think that’s a good idea too, but there are also risks here, like the problems that faced WWOTF’s advertising when some people saw the same thing 10 times and were annoyed.