I think the main point this piece is making is broadly interesting, but I don’t like the presentation.
> Kurzgesagt’s most recent video promoting the introducing of wild life to other planets is unethical and irresponsible
This uses emotional language very similar to what I see from people trying to ban content or write long Twitter attack threads.
I’d push back against us getting emotionally charged about each instance like this.
If you have access to ChatGPT, I recommend using it to help write things in what it considers the style of the Effective Altruism Forum. [Edit: I don't mean this to be demeaning. There are likely also good conversational-norm documents on the EA Forum; I'm sure some people would prefer those, but personally I'd find ChatGPT more useful here.]
FWIW I had a much stronger negative emotional reaction to this sentence than to the title & tone of the post.
Thanks for flagging!
I really didn't mean this in a demeaning way. I genuinely like ChatGPT a lot and think it can be really useful here.
I'd like to write posts about my preferred online etiquette and the community's usual etiquette, but I think it's really tough to do so in a way that's general enough to cover most cases while still being interesting to read.
This is one extreme example, but I’ve been writing about this a bit more on my Facebook feed.
https://www.facebook.com/ozzie.gooen/posts/pfbid02JiqiMnWoEme61EhUiZfDSTP9nFV5E1zBanThKfQMcDZXhW1zvRX9gutMeZnNnovsl
Yeah I regularly see Ozzie talk about replacing language styles with ChatGPT, hardly a new thing for this comment.
Thanks for the comment, Ozzie. I appreciate the constructive criticism.
I did go back and forth on a few title ideas, but ended up with this one because I believed it was succinct, in line with what I think, and engaging enough for readers to click. I can see how this may have rubbed you the wrong way, and I do have access to ChatGPT—I'll check it out.
I also agree that we should avoid emotionally charged language as much as possible, but I do think there’s a balance to be struck in making the initial statement, for example a title, compelling enough for people to read while staying true to the content it’s conveying.
In either case, thanks for the comment.
I wish people would stop optimizing their titles for what they think would be engaging to click on. I usually downvote such posts once I realize what was done.
I ended up upvoting this one because I think it makes an important point.
This is a picture perfect demonstration of how EA has an emotions problem. Someone writes an emotional post and your primary response is to tell them that that’s not in the “style of the EA Forum”? In the comments and in the body of the post, the OP is very constructive and generous and in my view follows all the discursive norms we want on the forum. I don’t think this comment is constructive, even before the absolute backhand of telling them to replace their voice with an AI.
I’m not sure what you want me to say here. I imagine there’s little response I could give at this point that would change your mind on this.
I obviously don't like being called the "picture perfect demonstration of how EA has an emotions problem".
Happy to try to have further discussions in PMs or similar, especially with David van Beveren, to whom I tried, perhaps unsuccessfully, to give some advice.
Again, I wasn’t using the GPT thing as an insult. Personally, I’m rewriting a lot of my work with AI assistants.
It was unfairly snarky, I’m sorry about that.