I was just talking about this with some friends! Has anyone fine-tuned a GPT-3 language model on LessWrong or EA posts and then had it generate or try to distill some posts? I think this would be mostly entertaining, kind of like the postmodern philosophy generator or Subreddit Simulator. But it seems like a win-win regardless of whether it turns out to be a good or bad distiller.
If it is bad, it could teach some valuable lessons about recognizing vacuous GPT-3-generated ideas.
If it is good, then maybe it could genuinely distill some ideas well, or even generate new ones (doubtful at the moment?).
There could also be a monthly award for whoever correctly identifies which post is AI-generated :).
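For anyone curious what the data-prep step might look like: a minimal sketch of turning scraped posts into the prompt/completion JSONL format commonly used for fine-tuning. The `posts` list, field names, and output filename here are all hypothetical stand-ins, not any real LessWrong/EA export format.

```python
import json

def posts_to_jsonl(posts, path):
    """Write posts as JSONL prompt/completion pairs for fine-tuning.

    Each record pairs a post's title (prompt) with its body (completion).
    The trailing "\n\n" on the prompt and leading space on the completion
    are common formatting conventions for fine-tuning data.
    """
    with open(path, "w", encoding="utf-8") as f:
        for post in posts:
            record = {
                "prompt": post["title"].strip() + "\n\n",
                "completion": " " + post["body"].strip(),
            }
            f.write(json.dumps(record) + "\n")

# Hypothetical sample data standing in for scraped posts.
posts = [
    {"title": "On Distillation", "body": "Distillation means compressing ideas..."},
]
posts_to_jsonl(posts, "lesswrong_finetune.jsonl")
```

Whether the resulting model distills anything or just produces plausible-sounding filler is, of course, the whole experiment.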