Is there any possibility of the forum having an AI-writing detector in the background which perhaps only the admins can see, but could be queried by suspicious users? I really don’t like AI writing and have called it out a number of times but have been wrong once. I imagine this has been thought about and there might even be a form of this going on already.
In saying this, my first post on LessWrong was scrapped because they identified it as AI-written, even though I have NEVER used AI in online writing, not even for checking/polishing. So that system obviously isn't perfect.
I feel mixed about AI-writing detection for a few reasons. I have very few issues with someone putting the bullet points of their argument into an AI, reading/editing/discussing the response a few times, and letting the AI write it up. I also think there is value in just putting your messy thoughts out as you have them rather than having everything polished, but it depends on the situation.
Also, separately, I'm worried AI-writing-detector proliferation will just speed up "immunity". I don't think there is something deep and fundamental that stops AI from writing, say, exactly what I have written to this point. You can already download all your writing, ask an AI to summarize it, make a text file that precisely describes your style, and then ask the AI to write something in your voice. I've done this, and yes, the results still have a bit of that vanilla LLM feel, but if there is actual market demand for solutions, this doesn't seem like an insurmountable problem.
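For concreteness, the "style file" workflow described above can be sketched in a few lines. This is a minimal illustration, not a real product: the filenames, prompt wording, and the assumption of a chat-style LLM message format (system/user roles) are all mine. It just assembles the prompt; you would pass the result to whatever model you use.

```python
# Illustrative sketch of the style-mimicry workflow described above.
# Assumes a generic chat-style LLM API (system/user message dicts);
# filenames and prompt wording are made up for the example.

from pathlib import Path

def build_style_messages(samples_dir: str, request: str) -> list[dict]:
    """Concatenate a user's past writing into a style-mimicry prompt."""
    samples = [p.read_text() for p in sorted(Path(samples_dir).glob("*.txt"))]
    style_context = "\n\n---\n\n".join(samples)
    system = (
        "Below are writing samples by one author. Infer their voice "
        "(tone, sentence rhythm, vocabulary, quirks) and write new text "
        "in that voice.\n\n" + style_context
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": request},
    ]
```

The point is how little machinery is involved: the "voice" is just context, which is why detector "immunity" seems cheap to acquire.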
I think people should say when they used AI and to what degree, and there should be an expectation that, just because polished writing is cheaper than it used to be, you will not pollute the forum with things you have not thought about an appropriate amount.
Wow, I'm almost the polar opposite, I think: the writing world you envisage feels sad and a bit scary to me. I would want close to zero AI involvement in the final draft. I think there's far, far more value in messy thoughts than in bullet points which an AI "expands" into a polished final product. AI in its current state inevitably changes or massages arguments. It also makes writing feel more "voiceless" and samey.
I want to engage with humans here on the forum. Someone's voice is an extension of them. When we type on the keyboard, it's coming almost directly from our brains: brains to hands to brains. Sure, we're not face to face or on Zoom, but when it's just my words and your words, I like that the conversation is direct. There is soul there. The more AI gets in the way, the less I feel we have a deep discourse.
I’m very OK with people using AI for brainstorming, researching, and testing arguments. But let every word of your final draft come straight from you. Your heart, your voice with all your quirks and problems.
I might do a poll to see what people think about this. I might be in the minority.
Curious to see that poll. I’m in that minority too.
But I do wonder how much the "someone's voice is an extension of them" view is mediated by the privilege of being able to effortlessly articulate one's thoughts in public, especially in a forum that invites scrutiny such as this one, and reliably get positive engagement. You and Brad seem to be on opposite ends of this spectrum (?). Your combination of prolific output, quality, and the fact that you do this despite having an obscenely busy day job reminds me of Scott Alexander, cf. this AMA exchange from back when he was a full-time psychiatrist:
How do you write so quickly? I find it takes me a dozen or more hours to write anything as thorough as one of your blog posts. (It’s possible that I’m just unusually slow).
Scott: I guess I don’t really understand why it takes so many people so long to write. They seem to be able to talk instantaneously, and writing isn’t that different from speech. Why can’t they just say what they want to say, but instead of speaking it aloud, write it down?
(Yeah, that level of clear thinking to clear writing translation is an insane privilege.)
On the other hand, Brad's reply to you reminds me of my younger self. I was horrible at this, and worked my rear off for years to get to essentially the starting point of my more innately articulate peers, who sailed through the job interviews, scholarship interviews, etc. that I kept bombing out of. You can tell how much I care about this by the fact that I could link, above, to a throwaway comment from deep within the chat threads of an 8-year-old Reddit AMA, just because someone mentioned a thing they had that I didn't. I can definitely see younger me being in the majority of your poll.
I think this has to do with the fact that I think mostly nonverbally, which makes the thought to writing / speech translation much harder. I suspect vast swathes of the population are similar. (The wordcel vs rotator thing is related, although I dislike the discourse around it.) This makes us, relatively speaking, voiceless in public fora, so discourse gets dominated by verbal thinkers which skews the intellectual environment and culture.
So when Brad said:

Going back and forth with AI, reviewing, and drafting can turn a writing process that might take several days to a week or more, into an hour or two, or less
I went “yeah definitely for nonverbal-ish thinkers, and I think this has the potential to reduce the skew and improve intellectual variety in discourse and culture, and separately I expect verbal-ish thinkers won’t appreciate this benefit” and sure enough your reply confirmed the latter.
That said, I do mostly agree with you that I haven’t been very impressed by the heavily AI-assisted writings I’ve seen, and like you I really dislike “AI voice”, so to me this has been more potential than realised benefit so far. Some guesses:
I'm wrong about the above.
AI isn't good enough yet to properly bridge the translation gap between heavily nonverbal-infused thinking and writing. (Or it is, but people aren't using the paid versions.)
Nonverbal-ish thinkers just don't reason as clearly as they think they do; they never noticed this because, unlike verbal-ish thinkers, they haven't often translated their thoughts into writing, which exposes thinking gaps, and AI-assisted writeups of their half-baked thoughts fill those gaps with slop.
The written word isn't the right translation target for nonverbal-ish thinking; it's something else, and (more advanced than today's) AI could potentially assist with this too. I'm thinking of Bret Victor's humane dynamic medium. Dangit, I should've just quoted these sections instead of subjecting you to my rambling:
A way in which people conceive and share thoughts. An idea might be expressed as a speech, a song, a drawing, a video, an essay, an equation, a tweet… These are different media.
Certain media open up new threads of thought that are otherwise inconceivable. Greek drama was made possible by writing; Shakespearean drama was made possible by print; Newtonian physics was made possible by equations.
The deepest effects are realized when a medium is diffused throughout a culture, not in the hands of a select few. A literate society is one in which all people participate in the exchange of written ideas, where the visual organization of words is second nature in the cultural consciousness. Societies with designated scribes do not enjoy the most significant benefits of literacy.
The conceiving and sharing of ideas represented computationally.
Computers can be used for efficiently distributing static media, as when reading an article or watching a video. But by “dynamic medium”, we mean the representation of ideas in which computation is essential, by enabling active exploration of implications and possibilities.
The modern world is shaped by vast complex systems — technical systems, environmental systems, societal systems — which cannot be clearly seen nor deeply understood via non-dynamic media. The dynamic medium may enable humanity to grasp and grapple with this century’s most critical ideas.
A dynamic medium which is communal, gives all people full agency, and is part of the real world. [more]
By “communal”, we mean bringing people together in the same physical space, with a medium that supports and strengthens face-to-face interaction, shared hands-on work, tacit knowledge, mutual context, and generally being present in the same reality.
By “agency”, we mean a person’s ability and confidence to view, change, extend, and remake every aspect of a system that they rely on, especially for fluently exploring new ideas and improvising solutions in unique situations. In the case of computing systems, this implies top-to-bottom programmability and composability, in a form that is accessible and human-scale.
By “real world”, we mean that material in the medium physically exists, and all of our human abilities and human senses can be applied to it. People are free to make use of their whole selves, every feature of their physical body and of the physical world, instead of interacting with a simulation through an interface.
“Real world” also refers to being situated in reality — understanding what’s actually happening and how things actually work instead of just abstractions; awareness of larger contexts — and especially the local reality of local needs and local knowledge rather than top-down centralized mass-produced solutions.
I think the problem is fundamentally the lack of care and attention given to the content being created, not whether or not AI is used. If it is in people's incentives to produce polished, thoughtless drivel on LinkedIn and they can do it in 10 seconds, they will.
This is very different from an iterative process in which the human is carefully examining the output and refining to optimize the exploration and explanation of an idea.
I think folks should be free to choose what level of AI use they’re comfortable with, without fear of being shamed or outed inherently for using AI. Those who don’t approve are free to downvote or ‘x’ to show their discontent. If enough people feel the same way then the incentives will do their job.
As I said on LinkedIn, you are still engaging with a human… a human that’s using AI.
I see AI as an interesting leveller of the playing field. It gives people less room to dismiss a point of view for being conveyed haphazardly in writing.
Writing well is a skill, built on the existential privilege of intelligence. If you don’t have it, does it mean you have less right to be heard?
I disagree pretty strongly with this. Although there are tradeoffs associated with AI writing, mostly its ability to produce content that can appear polished and well-considered when it is not, I think AI's enabling the proliferation of good thoughts and ideas that would otherwise just never happen far outweighs this.
Going back and forth with AI, reviewing, and drafting can turn a writing process that might take several days to a week or more, into an hour or two, or less. This enables me, and I’m sure others, to share content and ideas that otherwise we would not be able to.
Removing the barriers to people sharing their thoughts quickly and effectively is probably how we get more new and impactful ideas out there. I’ve been pretty sad at the sort of witch-huntery I’ve been seeing about AI generated content.
I agree that it may enable you to share ideas a little faster (although I'm not sure by how much). Most individual good ideas could be expressed in a couple of paragraphs if need be.
I don't buy, though, that you "wouldn't be able to share them" otherwise. I'm happy for AI to help with your thoughts and ideas (brainstorming, ideating, research), just not with your final writing. I'm not convinced at all yet that AI is "enabling the proliferation of good thoughts and ideas" in a significant way. Can you share any evidence of that? I've not been very impressed with posts on the forum here that heavily use AI.
I don’t think writing the final draft without AI is a huge barrier to sharing thoughts quickly and effectively. Insofar as it might be, I’d take the tradeoff the other way.
It's interesting that this is so polarising. I'm certainly one of those witch hunters, at the moment at least. A year ago I was more OK with AI writing, but I'm now vehemently against it after seeing LinkedIn, which two years ago was a pretty interesting platform, deteriorate into low-quality discourse full of AI slop in both the posts and the comments. On that platform at least, it has lowered the quality of ideas and discourse, not improved them. I hope Substack doesn't go the same way.
I have experience writing things with and without AI. At least for me, it can be a very difficult process trying to convey things as clearly and effectively as I can. Perhaps I am being unreasonable in putting that much time into the process and perhaps other people are just much better at writing clearly and effectively without AI. But I can say that I would not produce a lot of the content that I produce without AI being able to shorten the process significantly.
I'm surprised to read this; can you check your post on https://www.pangram.com/ ?

It wasn't a very well written comment; it was a bit benign and generic, which is maybe why it got flagged. To their credit, though, they reinstated it. Here it is below:
“This seems to be a nice observational study which analyses already available data, with an interesting and potentially important finding.
They didn’t do “controlling” in the technical sense of the word; they matched cases and controls on 40 baseline variables in the cohort, with “demographics, 15 comorbidities, concomitant cardiometabolic drugs, laboratories, vitals, and health-care utilization”.
The big caveat here is that these impressive observational findings often disappear, or become much smaller, when a randomised controlled trial is done. Observational studies can never prove causation. Usually that is because there is some silent feature about the kind of people that use melatonin to sleep that couldn’t be matched for, or was missed in the matching. A speculative example here: some silent, unknown illness could have caused people to have poor sleep, which led to melatonin use. Also, what if poor sleep itself led to poor cardiovascular health, not the melatonin?
This might be enough initial data to trigger a randomised placebo-controlled trial of melatonin. It might be hard to sign up enough people to detect an effect on mortality, although a smaller study could still at least pick up whether melatonin caused cardiovascular disease.
I agree with their conclusion, which I think is a great takeaway:
“These findings challenge the perception of melatonin as a benign chronic therapy and underscore the need for randomized trials to clarify its cardiovascular safety profile.””
This is the Pangram result:

This was the LessWrong rejection:
Literally just cranked out a two-minute, average-quality comment and got accused of being a bot, lol. Great introduction to the forum. To be fair, they followed up well and promptly, but it was a bit annoying because it was days later, and by that stage the thread had passed and the comment was irrelevant.
Thanks for sharing! I'd have guessed they would be using something at least as good as Pangram, but maybe it has too many false negatives for them, or it was rejected for other reasons and the wrong rejection message was shown.
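The false-negative/false-positive tension here is worth making concrete. A toy illustration (the detector scores below are made up, not from Pangram or any real tool): whatever threshold a forum picks, lowering it catches more AI text but wrongly flags more humans, and vice versa.

```python
# Toy illustration of the detector-threshold tradeoff. The score lists
# are invented for the example; real detectors emit some analogous
# per-document "probability AI-written" score.

def rates(scores_human, scores_ai, threshold):
    """Return (false_positive_rate, false_negative_rate) when any score
    above `threshold` is flagged as AI-written."""
    fp = sum(s > threshold for s in scores_human) / len(scores_human)
    fn = sum(s <= threshold for s in scores_ai) / len(scores_ai)
    return fp, fn

human = [0.05, 0.10, 0.20, 0.35, 0.60]   # scores on human-written text
ai    = [0.40, 0.55, 0.70, 0.85, 0.95]   # scores on AI-written text

for t in (0.3, 0.5, 0.8):
    fp, fn = rates(human, ai, t)
    print(f"threshold={t}: {fp:.0%} humans wrongly flagged, {fn:.0%} AI missed")
```

A moderation team that fears wrongly rejecting human posts (as happened above) would pick a strict threshold, which is exactly the regime with many false negatives.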
As an ex forum moderator I can sympathize with them, not a fun job!
We’re actually currently working on updating our policy on AI-generated content, so this thread (and follow up poll) is helpful! :)
Have made a related poll :).
https://forum.effectivealtruism.org/posts/evTZ2mguA7eg4Kvmm/how-much-of-a-post-are-you-comfortable-for-ai-to-write