I appreciate this article and find the core point compelling. However, I notice signs of heavy AI editing that somewhat diminish its impact for me.
Several supporting arguments come across as flimsy/obvious/grating/“fake” as a result. For example, the “Addressing the Predictable Objections” section reads more like someone who hasn’t actually considered the objections but just gave the simplest answers to surface-level questions, rather than someone who deeply brainstormed or crowdsourced the objections to the framework. Additionally, the article’s tendency towards binary framings makes it hard for me to think through the relevant tradeoffs.
The fundamental argument is strong. I also appreciate the emphasis on truth and the evident care taken to remove inaccuracies; I imagine significant editing effort went into avoiding hallucinations. Nonetheless, the breezy style makes the post hard for me to read, and I’d appreciate seeing it developed with more depth and authentic engagement with potential counterarguments.
Thanks for reading and engaging, Linch.
You’re correct that I used AI as an editor—with limited time, it was that or no post at all. That resource allocation choice (ship something imperfect but provocative vs. nothing) exemplifies the framework itself. I think more people should use AI to help develop and present their ideas rather than letting perfectionism or time constraints prevent them from contributing to important discussions.
The post was meant to provoke people to examine their own sacrifice allocations, not to be a comprehensive treatise. The objections section covers the predictable first-order pushbacks that stop people from even considering the framework. Deeper counterarguments about replaceability, offset quality, and norm-setting are important but would require their own posts.
The binary framings you note are intentional—they make the core tension vivid. Most people’s actual optimization should be marginal reallocation within their portfolio, but “consider shifting 20% of your sacrifice budget” doesn’t create the same useful discomfort as “Bob does more good than Alice.”
The core point is that we should recognize how individual particularities (income, skills, psychological makeup, social context) dramatically affect how each person can maximize their impact. What’s high-RoS (return on sacrifice) for one person may be terrible for another. When we evaluate both our own choices and others’ contributions, we need to account for these differences rather than applying uniform standards of virtue. The framework makes these personal tradeoffs explicit rather than hidden.
Am I right that a bunch of the content of this response itself was written by an AI?
What if you took whatever input you fed to the AI and posted that instead?
The “input” wasn’t a clean document—it was scattered notes, examples, and iterative revisions across multiple sessions. The final post is more coherent and useful than raw process documentation would be. We don’t ask other posters to share their drafts, notes, or feedback chains either. The substance and arguments are mine; AI helped with structure and editing.
That’s fair. I was imagining you wrote an outline and then fed the outline into an LLM. I usually prefer reading outlines over long posts, and I think it’s good practice to have a summary at the top of a post that’s basically your outline.