I split this into two posts, my first on LessWrong:
Keeping content out of LLM training datasets
Should we exclude alignment research from LLM training datasets?