Somewhat meta point on epistemic modesty, calling it out here because it is a pattern that has deeply frustrated me about EA/rationalism for as long as I have known them:

(making a quick take rather than commenting due to an app.operation_not_allowed error—I’m responding to @Linch’s quick take on war crimes)

I guess these are just EA/rationalist norms, but an approach that glosses major positions as being so quickly dismissible strikes me as insufficiently epistemically modest. I would expect such a treatment to fail to properly consider alternative answers or intuitions to the author’s own, especially the strongest versions of those answers (e.g. modern just war positions), to miss the most sophisticated counterpoints (e.g. your ‘oldest and clearest form’ gambit may just be bracketing out the counterexamples that don’t fit your definition, like genocide or sexual violence), and to reinvent the wheel; e.g. the view seems to be exactly this, from 2013:
“A final rationale for the perfidy prohibition is to preserve the possibility of a return to peace. To prevent the degradation of trust and the bad faith between warring parties that would impede negotiation of peace terms. An effective perfidy prohibition preserves the good faith upon which ceasefires, armistices and conclusions of hostilities rely.”
I think deep engagement with the range of serious views on the topic is required to make your post “the best modern articulation of these ancient ideas”. I don’t think the quick take is on a good track for that.
Thanks for the reference, and the point that the structural argument doesn’t handle all modern cases as well. Will address both in the post.
Though I’m confused. If you’re accusing me of reinventing the wheel, why reference Watts from 2013 and not Grotius in 1625? Or Diodotus in 427 BC, or other references in Thucydides?
I think EAs if anything are far too epistemically modest and unwilling to stick their neck out for defending true and accurate positions. I also find demands to police epistemic modesty based on hastily written quick takes annoying.
The de facto outcome if I take these concerns seriously is to showcase much less of my intermediate thinking, to bulletproof all my writing before it sees the light of day, and/or to crosspost to the EA Forum less.
I’ve indeed been taking actions like this, especially the last one, due to comments like yours (this is the third time you’ve done this), though I’m unsure if I endorse it on net.
I’m not trying to be unkind, and I apologise if I was. I’ll take this down if you ask here or via DM. I overreacted to what is a quick take because I think it was emblematic of a bad pattern, but that is unfair and disproportionate of me.

My main thing here is to push for better intermediate thinking. Like the standard EA/rat approach is so often based on dismissing mainstream or non-EA views, and then acting like their individual opinion is clearly superior, often reinventing current or past views that have had lots of non-EA examination. I want EA thinking to be better, and a lot of the time it would be improved by people reading more before opining, and not thinking the views of EA are so special.
> I think EAs if anything are far too epistemically modest and unwilling to stick their neck out for defending true and accurate positions.
We just have very different experiences then.
> (this is the third time you’ve done this),
Do you mean critique someone on epistemic immodesty grounds? This is probably true but can you point me to the examples you have in mind? (I may indeed be doing this too much and seeing the examples would help)
Thanks for the much kinder response and the serious engagement! :) Please don’t take your comment down, it’s good to have this discussion in the open.
(Also apologies for the long comment, brain not working really well so less succinct than I want to be)
> My main thing here is to push for better intermediate thinking. Like the standard EA/rat approach is so often based on dismissing mainstream or non-EA views, and then acting like their individual opinion is clearly superior.
I want to defend my own approach here, and won’t speak for the “standard EA/rat approach” except insofar as my thinking is constitutive of that approach (as the old joke goes, “you’re not in traffic, you are traffic”). Generally, when I try to learn information about the world, I seek facts and models that are:
- interesting (i.e., novel to me)
- true
- useful
The best way to do this typically involves some combination of Google searches, original thinking, reading papers, conversations, toy models, and (since ~2025) talking to AIs[1]. Since college, I’ve honed an ability to quickly form views that I can defend and that I believe I’m reasonably calibrated on. I think this is sometimes surprising to people, but it shouldn’t be. The first data point tells you a lot[2].
Similarly, my bar for publishing my thoughts, ignoring opportunity cost, is also fairly low. The primary thing I’m interested in from a content perspective is some combination of novel/true/useful to my readers. Novel to whom? I have an implicit model of who my readers are, and I try to calibrate accordingly: I want to write things that are new to a large fraction of my readers. I think you might have more of an academia-derived model where it’s very important to only share thoughts that are novel to humanity.
I think that’s a worse norm. If I can write a better intro to stealth than what’s widely understood/disseminated, that’s a useful service even if no individual point there is original.
Similarly, I think it’s less important in non-academic contexts to attribute the originators of an idea or an analysis. I don’t think it’s useless, just less important. When I’m thinking about a problem, academic citations are mostly useful insofar as they benefit either me or my readers, rather than being the first line of attack.
To be clear, credit attribution is valuable and I want to avoid actual plagiarism (I think academic norms are valuable in a bunch of ways and I want to respect the institution even when I disagree with it).
Also, this may be nonobvious, but I do in fact “do the reading” and engage with experts significantly, often past the point of diminishing marginal returns compared to honing my own thinking or writing.
For example, in my earlier post on war, where I summarized and extended James Fearon’s bargaining model, I read Fearon’s paper and skimmed a bunch of others to form a gestalt view. I also emailed my post to both Fearon and another academic on war (Fearon replied positively, the other academic didn’t respond).
In my Chiang review I read something like 10 reviews before starting my piece, and maybe more like 20 before finishing it.
And war crimes in particular is a topic I’ve been reading about casually for several years. See here for one example.
I also think it’s very easy to say “do the reading,” but in practice which reading you do is highly contingent, and it’s easy to waste a bunch of time feeling virtuous for doing the homework on adjacent topics without actually learning useful things for addressing your original question. For example, you seem to believe that I should be reading the latest academic papers on just war (a plausible enough hypothesis!). Someone on Substack (with a relevant background!) suggested I read the negotiating history of the Geneva Conventions and their Additional Protocols (also plausible!). Someone on LessWrong suggested I read Tom Schelling’s treatment of the subject (plausible enough, I ordered the book). And these are just the ones that I think are sufficiently plausible! There are so many other ways to burn time seeming to do the reading instead of committing to a hypothesis and seeing where it lands.
Finally, I’d note that when you said people’s arguments are
> so often based on dismissing mainstream or non-EA views
there is a major selection effect. If I think a mainstream view is both true and introduced well, I usually don’t bother writing about it.
__
> I think EAs if anything are far too epistemically modest and unwilling to stick their neck out for defending true and accurate positions.
> We just have very different experiences then.
Concretely I think Bentham’s Bulldog/Matthew comes across as overconfident on his blog, as does John Wentworth on LessWrong. But most randomly selected writers on EAF and LW are underconfident and often hedge in 10 words what they could say in 3.
Maybe a background methodological difference here is that I strongly agree with Scott Alexander on the most useful forms of criticism (highly specific, targeted, concrete), whereas I’m skeptical of deep paradigmatic criticisms really being correct, changing people’s minds, or overall being insightful/true/useful.
> (this is the third time you’ve done this),
> Do you mean critique someone on epistemic immodesty grounds?
I meant respond to my comments or posts in a way that seems asymmetrically easy to make but very hard to respond to/argue against on the object level. I don’t want to dredge up the links, sorry.
Anyway, thanks for the response and for giving me an opportunity to elaborate on my thoughts and overall position here.
[1] This is for the types of questions I’m interested in, and for my workflow. A historian might do more expert surveys, an ML researcher might run more experiments, a biologist might work in a wet lab or in the field, and so forth. In the past I also did more expert interviews.
[2] If you have a model of the world/human epistemics where surprisal value is constant across learning time, or even highly superlinear per topic, then you might prioritize your actions very differently from me.
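To make footnote [2] slightly more concrete, here is a minimal Bayesian sketch of the diminishing-surprisal model being assumed (a standard toy example, not anything from the original exchange): when estimating a Bernoulli parameter $\theta$ from $n$ observations with a uniform prior, the posterior after $k$ successes is $\mathrm{Beta}(k+1,\, n-k+1)$, whose variance shrinks roughly like $1/n$:

$$\operatorname{Var}(\theta \mid k, n) \;=\; \frac{(k+1)(n-k+1)}{(n+2)^2 (n+3)} \;\approx\; \frac{\hat{\theta}(1-\hat{\theta})}{n}, \qquad \hat{\theta} = \frac{k+1}{n+2}.$$

On this model, the first few observations remove most of the uncertainty, which is what licenses forming a defensible view early; if surprisal per observation were instead constant or superlinear, it would pay to keep reading much longer before opining.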
I feel like I’ve heard this position a lot before, and I have some sympathy for it, but I feel like it implicitly overlooks a lot of what I find valuable about writing EA Forum comments, and it sets an overly high bar.
When one writes academic papers, one is expected to cite relevant previous work. Citation is an important mechanism for tracing the evidence for claims and for assigning credit. Even in academic spheres, I think this is perhaps taken pathologically far (to the point where it is probably sometimes unduly burdensome, and vaguely implies that pretty obvious ideas or hypotheses had to have come from someone else rather than being generated by the author), but the reasons why it’s important to cite your claims are a lot stronger in academia.
The EA Forum is partly intended, I believe, to be a place where people are encouraged to say things more quickly and speculatively after having done less research, and where people are more encouraged to share their own overall judgments and thinking process without necessarily fully defending all their positions. You might think it’s bad to have such a place and that people should mostly just rely on the academic literature. I disagree with that, but trying to make the EA Forum use the same standards that academia uses seems counterproductive. We can just use academia for that.
And at least in my mind, a big part of the point of writing things like what Linch wrote is to practice my critical thinking skills and apply them to new areas, for the eventual purpose of use in areas where there’s not already a lot of scholarship. So I value approaching an area I don’t know much about, like the topic of war crimes, trying to understand it on my own, seeing how far I can get, and forming my own view, rather than necessarily seeing this as strictly an opportunity to practice building on existing literature on war crimes (or worse, just regurgitating that literature undiscerningly).