Information intake: The third leg of rationalism

The argument of this post is that agentic AI tools have collapsed the cost of staying informed from a part-time job to roughly the cost of a magazine subscription. A perfect Bayesian on a narrow default feed loses to a mediocre updater on a wide custom one. I’m posting it here rather than on LessWrong because it builds on my experiences with effective altruism specifically.

If Odin Had a Million Ravens

Odin had two ravens. Every day they flew out across the world, watched everything, and returned at dusk to whisper what they had seen. That is how the all-father stayed all-knowing. Not through divine intuition, but through a two-bird intelligence service.

Now imagine Odin with a million ravens.

He would receive every rumor from every hall, every whispered threat, every petty quarrel between every farmer in Midgard. He would know more. He would also know less, because the signal would vanish into the flock. A million ravens is not an upgrade. It is a denial-of-service attack on a god.

Staying informed is an optimal stopping problem. The two-raven setup worked because Odin had picked a point on the curve where he got enough to act and not so much that he froze. Modern information feeds default to the million-raven configuration, with an added bias toward controversy and drama.

The third leg of rationalism

Adam Grant’s Think Again calls it the scientist’s mindset: treat beliefs as hypotheses, update them on evidence, and enjoy the moment you catch yourself being wrong. Julia Galef calls it the scout mindset, with emphasis on forming accurate maps as opposed to the soldier mindset of defending beliefs as territory. Both of these echo the Bayesian discipline of updating beliefs in proportion to evidence. Put them all in a Venn diagram and the overlap is most of what people mean when they say “rationalist”: see clearly, treat beliefs as provisional, update when the evidence moves.

All of these operate on information once it arrives. None of them says much about how it arrives.

Rationalism is usually carved into two halves: epistemic rationality, which gets from evidence to accurate beliefs, and instrumental rationality, which gets from accurate beliefs to effective action. A large literature exists on both. What gets skipped is the layer upstream of both: the pipeline that decides what evidence even reaches you. Call it the third leg. The intake layer.

The three multiply. You can have a perfect scout-scientist-Bayesian running on a garbage intake pipeline and still end up with a garbage map. A brilliant updater with a narrow feed loses to a mediocre updater with a wide one. The sharpest reasoning in the world cannot rescue you from not knowing the thing you needed to know; it is excellent software running on terrible inputs.

Why even EA gets this wrong

I spend a lot of time talking to people in the effective altruism community. They take probabilities seriously. They do expected-value calculations for charitable donations.

And most of them consume information about the world the same way everyone else does. Salient headlines from a dozen outlets. A Twitter feed shaped by whatever the algorithm decided last Tuesday. A podcast on the commute. An email newsletter or two they skim when they remember.

This is not stupid, it is just the default. Setting up something better takes real effort, and the payoff is invisible until months later when you notice you saw a story develop that your peers only caught at the crisis point. But for people whose stated goal is to do the most good per unit of effort, it is strange that the information layer gets so little attention. If you are going to make high-stakes bets on which causes matter, the quality of your raw feed is not a side issue. It is upstream of everything.

What news is actually for

News is most valuable when stakes are highest and action is cheap. A petition to sign. A demonstration to join. A policy window that closes in a week. A donation that arrives before a matching deadline. In these moments, a fast feed beats a slow feed, and a wide feed beats a narrow one.

Most of the time, though, news is not in this mode. Most of the time the best outlet is giving you a five-hundred-word article where the actionable content is one sentence. The signal-to-noise ratio of reading a dozen full articles per day is not good enough. You finish informed about the topics the editors chose and ignorant about everything else, which might include the three stories that actually matter for your life or your work.

My claim is that 40 three-sentence summaries are a better daily diet than a dozen full articles. Not always. Not for every reader. But as a default, for someone who wants enough breadth not to miss the big shifts, and who will then go deep on the three or four items that turn out to deserve it.

The 40 summaries give you coverage. They tell you that a coup happened, that a paper came out, that a company shipped, that a law passed. The 12 full articles give you depth on whatever the editor picked. Depth is valuable, but you should be picking where to go deep, not delegating the pick to an algorithm whose goal is to keep you on the page.

Not just news

Blog posts are where this argument actually pays off. News tells you that things happened. Blogs, papers, long essays, Substacks, and forum posts are where someone sits down and tries to work out what those things mean, and most of the view-updating content lives in that second category, not the first. A careful post from a thoughtful writer can move your model more than a month of Reuters headlines.

Anticipated objections

“Summaries are lossy. You miss the nuance.” Yes, on the summaries. No, on the system. The schema described below includes direct quotes specifically so that you can spot-check the agent’s framing and click through when something is interesting.

“Haven’t Zvi and Scott and others already covered information diets?” They write some of the best blogs in the recommended list below. They have written about what to read; they have not, as far as I have seen, written much about how to operationalize a curated pipeline that handles the volume for you. The novelty here is the cost collapse, not the underlying observation that information diets matter.

“Hallucinations.” Real, and the reason for the direct-quote field in the schema. If a quote does not Ctrl+F to a real sentence in the source, the card is suspect and gets discarded. In practice this happens rarely with current frontier models on the kind of well-formatted blog content this pipeline targets, but the verification mechanism is the point: do not rely on the agent being honest, rely on it being checkable.
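The spot-check is mechanical enough to script. Here is a minimal sketch, assuming cards arrive as lists of quote strings and you have the post body as text; the function and field names are my own, not anything a particular tool prescribes:

```python
import re

def verify_quotes(card_quotes, source_text):
    """Return the quotes that do NOT appear verbatim in the source.

    Whitespace and smart-punctuation differences are normalized away,
    since line wrapping and curly quotes are the usual causes of false
    misses. An empty result means the card passes; any miss means the
    card is suspect and should be discarded.
    """
    def normalize(s):
        s = (s.replace("\u201c", '"').replace("\u201d", '"')
               .replace("\u2019", "'"))
        return re.sub(r"\s+", " ", s).strip().lower()

    haystack = normalize(source_text)
    return [q for q in card_quotes if normalize(q) not in haystack]
```

Usage is the rule from the paragraph above: if `verify_quotes(card_quotes, post_body)` is non-empty, the card fails the Ctrl+F test and gets thrown out rather than argued with.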

The custom feed

Setting up a custom news and blog feed has been the single largest upgrade to my information diet in years. Not because any one source is magical. Because the aggregate shape changed.

For years the honest answer to “how do I actually optimize my intake?” was that you could not, really, not past a certain point. Building a real pipeline meant dealing with RSS, curating source lists, doing your own triage, and absorbing all of it yourself. It was a part-time job most people were reasonably unwilling to take on.

What made this radically easier in the last year is that you no longer need to build the pipeline yourself. MiniMax Agent, Clawdbot, and Claude Cowork will happily chew through several hundred posts a day, filter out administrative updates and near-duplicates, and hand you each remaining item in a structured card. This is not an advertisement for any of these tools, and I am not saying you should relax and let AI do your thinking for you, but they all cost less per month than a single magazine subscription (Claude excluded).
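“Filter near-duplicates” sounds vague, but it has simple mechanical versions. One is Jaccard similarity over word shingles; this is an illustrative sketch of the idea, not a claim about how any of these tools work internally, and the threshold is an arbitrary choice:

```python
def shingles(text, k=3):
    """Break text into overlapping word k-grams ("shingles")."""
    words = text.lower().split()
    return {" ".join(words[i:i + k])
            for i in range(max(len(words) - k + 1, 1))}

def is_near_duplicate(a, b, threshold=0.6):
    """Jaccard similarity over word 3-grams: the size of the shared
    shingle set divided by the size of the combined one. Above the
    threshold, the two items are treated as the same story."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) >= threshold
```

Two wire reports of the same rate decision share most of their shingles and collapse into one card; a coup and a protein-folding paper share none and both survive.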

The schema I found most useful has four fields:

  • A one-sentence summary.

  • Two to five direct quotes, short enough that you can Ctrl+F them in the source to verify the agent is not hallucinating.

  • One sentence on “assuming this is true, how would it update my views.” Notice that the agent does not need to know what your views currently are. It only needs to point you in the direction of the update; you do the update yourself.

  • An impact score from one to ten, so you can skip the bottom half at a glance.
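For concreteness, the four fields map onto a small record, with the “skip the bottom half” rule as a one-liner. The names and the cutoff here are my own choices, a sketch rather than a spec:

```python
from dataclasses import dataclass

@dataclass
class Card:
    summary: str        # one-sentence summary
    quotes: list        # two to five short, Ctrl+F-able direct quotes
    belief_update: str  # "assuming this is true, how would it update my views"
    impact: int         # 1-10, used for the at-a-glance skip

def worth_reading(cards, threshold=6):
    """Keep only cards at or above the impact threshold,
    highest impact first."""
    return sorted((c for c in cards if c.impact >= threshold),
                  key=lambda c: c.impact, reverse=True)
```

The point of the structure is that triage becomes a glance at one integer per card instead of a skim of every article.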

What you get out of this: breadth I could never get from a handful of outlets; the raw material to notice patterns across sources, which is where most of the interesting updates actually live; and the hours I used to spend doomscrolling, now spent on the two or three pieces per week that genuinely deserve a careful read.

The principle is boring and general. Information consumption is a system, not a habit. Treat it like one. Decide what you want breadth on and what you want depth on. Decide what you want to catch and what you are willing to miss. Choose your feeds deliberately, and now also choose your filter, instead of accepting the defaults a handful of companies have picked for you.

The hardest part of building a feed like this is not the agent. It is knowing which sources to point it at. Good writing does not surface itself. The best blogs are rarely well-SEO’d, the best Substacks grow through inbound links from substacks you don’t read yet, and the big recommendation engines will faithfully steer you back to the same defaults this post was written against. The cold-start problem is real, and most people quietly give up on solving it. So here is my opinionated starting list. It leans toward EA, rationalist, and AI-safety-adjacent writing, and skews toward existential-risk-relevant material, which reflects what I actually read rather than any claim to neutrality. Treat it as a seed, not a prescription. Subtract the ones that do not fit your priorities, and add whichever three or four the authors you like keep linking to.

General:

AI-specific:

The agent will handle the volume. You just have to pick once.

As for the prompt with which to get an agent to start curating, here is the one I used:

“You have been given a list of blogs/newsletters. Set up a daily recurring task: For each source, check whether there is any new content since yesterday. If there is no new content, state that explicitly and move on to the next source.

Output:

  • A table with the feed name, the post title, and a number between 0 and 100 indicating how likely the post is to make me update my views.

  • A small number of the most meaningful direct quotes from the text. Include only quotes that carry the core insight; omit anything routine or explanatory.

  • A final line labeled “Belief update:” that states, in one sentence, how a well-informed reader should update their beliefs if the quoted claims are true.

Rules:

  – Use only direct quotes from the source for the bullet points.

  – Keep the number of quotes minimal; prefer depth and signal over coverage.

  – Do not paraphrase or summarize the quotes.

  – Do not include commentary, praise, critique, or filler text.

  – If a post contains no meaningful insights, say so explicitly and still include a “Belief update:” line stating “no update.”

  – Treat each feed independently and do not try to unify framing across sources.

  – Start each message with “------------------------------”

Your primary goal is to surface what is genuinely surprising, striking, or noteworthy. Focus on insights, arguments, empirical findings, or framing choices that are non-obvious, counterintuitive, or meaningfully advance understanding compared to what a well-informed reader would already expect from that author or domain.

De-emphasize routine commentary, throat-clearing, and obvious restatements of known positions. If a piece contains no such nuggets, say so explicitly rather than padding the summary.

For each item, include:

  – A one- or two-sentence neutral summary of what the piece is about.

  – A short section titled “Notable insights” that highlights the most surprising or insight-dense points, written concretely and precisely.

  – If relevant, briefly note why the insight matters or what it updates compared to prior assumptions.

Do not editorialize, praise, or critique unless the novelty or weakness itself is the noteworthy feature. Keep the tone analytical, concise, and source-faithful.

Do not make anything up; rely strictly on the provided text.”

This post is an imperfect start. The source list reflects what I read, not what is optimal. The prompt has been iterated maybe a dozen times and almost certainly has more rounds of improvement left in it. What I would most like to see is someone taking this further: better source lists for cause areas I do not follow as closely, or a version of the prompt that handles podcasts, papers, or non-English sources well. If this post mostly serves as a target for someone else to improve on, that is a better outcome than if it stands as the definitive word on the subject.
