Thanks for the much kinder response and the serious engagement! :) Please don’t take your comment down, it’s good to have this discussion in the open.
(Also apologies for the long comment, brain not working really well so less succinct than I want to be)
> My main thing here is to push for better intermediate thinking. Like the standard EA/rat approach is so often based on dismissing mainstream or non-EA views, and then acting like their individual opinion is clearly superior.
I want to defend my own approach here, and won’t speak for the “standard EA/rat approach” except insofar as my thinking is constitutive of that approach (as the old joke goes, “you’re not in traffic, you are traffic”). Generally, when I try to learn information about the world, I seek facts and models that are
- interesting (i.e., novel to me)
- true
- useful
The best way to do this typically involves some combination of Google searches, original thinking, reading papers, conversations, toy models, and (since ~2025) talking to AIs[1]. Since college, I’ve honed an ability to quickly form views that I can defend and believe I’m reasonably calibrated on. I think this is sometimes surprising to people, but it shouldn’t be. The first data point tells you a lot[2].
Similarly, my bar for publishing my thoughts, ignoring opportunity cost, is also fairly low. The primary thing I’m interested in from a content perspective is some combination of novel/true/useful to my readers. Novel to whom? I have an implicit model of who my readers are and I try to calibrate accordingly: I want to write things that are new to a large fraction of them. I think you might have more of an academia-derived model where it’s very important to only share thoughts that are novel to humanity.
I think this is less good of a norm. If I can write a better intro to stealth than is widely understood/disseminated, I think this is a useful service even if no individual point there is original.
Similarly, I think it’s less important in non-academic contexts to attribute the originators of an idea or an analysis. I don’t think it’s useless, just less important. When I’m thinking about a problem, academic citations are mostly useful insofar as they benefit either me or my readers, rather than being the first line of attack.
To be clear, credit attribution is valuable and I want to avoid actual plagiarism (I think academic norms are valuable in a bunch of ways and I want to respect the institution even when I disagree with it).
Also, this may be nonobvious, but I do in fact “do the reading” and “expert engagement” significantly, often past the point of diminishing marginal returns compared to honing my own thinking or writing.
For example, in my earlier post on war, where I summarized and extended James Fearon’s bargaining model, I read Fearon’s paper and skimmed a bunch of others to form a gestalt view. I also emailed my post to both Fearon and another academic on war (Fearon replied positively, the other academic didn’t respond).
In my Chiang review I read something like 10 reviews before starting my piece, and maybe more like 20 before finishing it.
And on war crimes in particular, I’ve been reading casually for several years. See here for one example.
I also think it’s very easy to say “do the reading,” but in practice what reading you do is highly contingent, and it’s easy to waste a bunch of time feeling virtuous for doing the homework on adjacent topics without actually learning things useful for addressing your original question. For example, you seem to believe that I should be reading the latest academic papers on just war (a plausible enough hypothesis!). Someone on Substack (with a relevant background!) suggested I read the negotiating history of the Geneva Conventions and their Additional Protocols (also plausible!). Someone on LessWrong suggested I read Tom Schelling’s treatment of the subject (plausible enough, I ordered the book). And these are just the ones that I think are sufficiently plausible! There are so many other ways to burn time seeming to do the reading instead of committing to a hypothesis and seeing where it lands.
Finally, I’d note that when you said people’s arguments are
> so often based on dismissing mainstream or non-EA views
there is a major selection effect. If I think a mainstream view is both true and introduced well, I usually don’t bother writing about it.
__
> I think EAs if anything are far too epistemically modest and unwilling to stick their neck out for defending true and accurate positions.
We just have very different experiences then.
Concretely I think Bentham’s Bulldog/Matthew comes across as overconfident on his blog, as does John Wentworth on LessWrong. But most randomly selected writers on EAF and LW are underconfident and often hedge in 10 words what they could say in 3.
Maybe a background methodological difference here is that I strongly agree with Scott Alexander on the most useful forms of criticism (highly specific, targeted, concrete), whereas I’m skeptical that deep paradigmatic criticisms are really correct, change people’s minds, or are overall insightful/true/useful.
> (this is the third time you’ve done this)
Do you mean critique someone on epistemic immodesty grounds?
I meant respond to my comments or posts in a way that seems asymmetrically easy to make but very hard to respond to/argue against on the object level. I don’t want to dredge up the links, sorry.
Anyway, thanks for the response and for giving me an opportunity to elaborate my thoughts and overall position here.
[1] This applies to the types of questions I’m interested in, and to my workflow. A historian might do more expert surveys, an ML researcher might run more experiments, a biologist might work in a wet lab or in the field, and so forth. In the past I also did more expert interviews.
[2] If you have a model of the world/human epistemics where surprisal value is constant across learning time, or even highly superlinear per topic, then you might prioritize your actions very differently from me.