The Curse of Not Being Seen: A Note on EA Hiring and Distance

Epistemic status: Tentative and personal. I’m somewhat biased by the frustration of unemployment and repeated rejections and very open to being corrected, especially by people with access to hiring data. Still, this is a pattern I’ve noticed over several years, and I thought it was worth articulating.

TL;DR

  • I live in Rome and, since 2019, I’ve sent ≈70 applications to EA-aligned roles, receiving only 3 first-round interviews.

  • I suspect that being physically and socially distant from core EA hubs (SF, Oxford, Berlin) may reduce candidate visibility.

  • My background is atypical (founder, content creator, few institutional references), which might make me harder to evaluate.

  • Over-reliance on public writing (e.g. Forum posts) may disadvantage those better at execution than exposition.

  • I propose testing a two-step blind screening: evaluate anonymized work first, then review full profiles. Curious to hear thoughts.

Intro

Since 2019, I’ve applied to roughly 70 EA-aligned jobs—from research assistant positions to operational roles, mostly at organizations based in SF, New York, Loxbridge, and the usual hotspots. I’ve made it to a first-round interview only three times.[1]

Now, let me be clear up front: this could very well be because I’m not a strong candidate. Maybe my experience isn’t quite relevant. Maybe my writing isn’t that great. Maybe I’m just not what these orgs are looking for. That’s completely plausible, and I don’t want to fall into the classic trap of “if they didn’t pick me, the system must be broken.”

But I do wonder if there’s another contributing factor that’s worth considering—namely, the effect of not being physically or socially present within the EA ecosystem.

I’ve mostly worked independently—startup founder, content creator, occasional freelancer. I don’t have institutional affiliations or high-prestige references. I’ve never co-worked at Trajan House[2] or attended a fellowship. I live in Rome, which doesn’t have a strong EA presence.

As a result, my applications often go out into the void. There’s almost no mutual contact who can vouch for me, no familiar name on my CV that resonates with someone on the hiring panel. My referees are mostly friends, not professional contacts, who could at best say, “yeah, I’ve met him, he seems thoughtful and capable.”

And that makes me suspect that a kind of proximity bias might be quietly operating in the selection processes. Not in a malicious or intentional way—just as a natural byproduct of how humans process uncertainty. When two candidates have similar profiles on paper, it’s easier to trust the one whose context you understand, who lives nearby, or who’s been seen around. Legibility becomes a tiebreaker.

Now, one might argue—and reasonably so—that context should matter. Understanding where someone comes from, how they operate, and who can vouch for them is part of evaluating culture fit and collaborative potential. And from a hiring manager’s perspective, that makes sense: you want to reduce downside risk.

But the downside of this is that we may be filtering out potentially strong candidates before we ever see what they can actually do.

“But Just Write on the Forum Bro”

I’m aware that visibility doesn’t just come from physical presence. One common suggestion—especially within the EA community—is to build credibility by posting on the Forum. And to be fair, that can be a great way to showcase your thinking, signal alignment, or get feedback.[3]

But it’s also worth asking: how strongly should we weigh public posting as a proxy for value-add?

For many roles—especially operational, logistical, or execution-heavy ones—public writing isn’t always relevant. Some people don’t enjoy writing. Others are better at doing than explaining. Still others may be older, working full-time, parenting, or living in places with low EA density and few incentives to write for an audience they don’t naturally engage with.

Relying too much on Forum visibility can subtly bias us toward a certain personality type: analytically expressive, cognitively extroverted, and culturally fluent in EA-speak. That’s not bad per se—but it might lead us to overlook different but equally valuable kinds of contributors.

A modest proposal

What I’m suggesting is something fairly lightweight: more experimentation with two-step blind screening in EA hiring.[4]

  1. Step one: strip out names, locations, university names, and references from the initial application. Evaluate candidates based on short, role-relevant tasks—writing samples, spreadsheets, case responses, whatever best matches the role.

  2. Step two: only after shortlisting based on the anonymized task do reviewers gain access to the full CV, location, and references. Then you proceed as normal with interviews and contextual evaluation.

This doesn’t eliminate the value of context—it just shifts it later in the funnel, after you’ve already had a chance to assess actual work. In many cases, that might mean giving a chance to people whose backgrounds initially seem unfamiliar, but whose output is surprisingly strong.
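To make the two steps concrete, here’s a minimal sketch of what a blind-first pipeline could look like in code. Everything here is hypothetical: the field names, the records, and the helper functions are illustrative assumptions, not any org’s actual process.

```python
import uuid

# Hypothetical applicant records; all field names are illustrative.
applications = [
    {"name": "A. Rossi", "location": "Rome", "university": "Sapienza",
     "references": ["friend"], "work_sample": "Spreadsheet model for X"},
    {"name": "B. Smith", "location": "Oxford", "university": "Oxford",
     "references": ["professor"], "work_sample": "Case response on Y"},
]

IDENTIFYING_FIELDS = {"name", "location", "university", "references"}

def blind(apps):
    """Step one: reviewers see only role-relevant work, keyed by a random ID."""
    key = {}      # random ID -> full record, held by a non-reviewer
    blinded = []  # what reviewers actually evaluate
    for app in apps:
        cid = uuid.uuid4().hex[:8]
        key[cid] = app
        blinded.append({"id": cid, **{k: v for k, v in app.items()
                                      if k not in IDENTIFYING_FIELDS}})
    return blinded, key

def unblind(shortlisted_ids, key):
    """Step two: reveal full profiles only for the shortlist."""
    return [key[cid] for cid in shortlisted_ids]

blinded, key = blind(applications)
# After scoring the anonymized work samples, reveal the shortlist:
shortlist = unblind([blinded[0]["id"]], key)
```

The point of the split is that whoever holds `key` is not the person scoring `blinded`, so context enters the funnel only after the work has been judged.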

Sure, this process adds a little complexity. It might be marginally more time-consuming, especially for small orgs. And yes, it might occasionally let through candidates who “perform well on paper” but don’t integrate well in practice. But those costs may be outweighed by the benefits of broadening our talent discovery radius—especially in a movement that cares so much about cause neutrality, evidence-based thinking, and impartiality.

More importantly, this is a testable claim. If it hasn’t been done already, it would be possible to run a pilot: take a round of applications, evaluate half through the standard process and half via a blind-first pipeline. Compare which candidates make it to final interviews, and how satisfied the hiring managers are with each group. It’s a small empirical question with potentially large implications.
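For what it’s worth, the random split for such a pilot is trivial; a sketch (with made-up applicant IDs) might look like:

```python
import random

def assign_arms(application_ids, seed=42):
    """Randomly split one round's applicants into two evaluation arms."""
    rng = random.Random(seed)  # fixed seed so the assignment is auditable
    ids = list(application_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"standard": ids[:half], "blind_first": ids[half:]}

arms = assign_arms(range(100))
# Each arm is then evaluated by its own process; afterwards, compare
# final-interview rates and hiring-manager satisfaction across arms.
```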

I don’t want to overstate the case. Maybe I’m wrong (please reach out if you want to discuss in detail why you think I am), and the impact of proximity bias is negligible. Or maybe it’s real, but not fixable in practice. But from my personal experience—and from conversations with others outside the core hubs—it feels like there’s something here worth investigating.

EA has built its reputation on doing what works, even when it’s unintuitive or effortful. If we suspect our current filters are systematically excluding certain types of valuable talent, shouldn’t we at least be curious enough to check?

If nothing else, blind hiring would finally spare hiring panels from mispronouncing my very Italian surname.


Appendix—Why this might actually be worth it

Back-of-the-envelope, ±1 order of magnitude.

Say a typical hiring round receives around 100 applications. Implementing a two-step blind process—where the first phase strips identifying information and evaluates anonymized work samples—might add about 3 hours of extra work in total: anonymizing CVs, reviewing short tasks, and matching candidates back to their identities. Assuming a fully loaded staff cost of $100/hour, the added cost per round would be around $300.

Now, suppose that just 1 in 100 applicants—filtered in by this process but who might otherwise have been overlooked due to lack of visibility, proximity, or social capital—turns out to be a genuinely high-impact hire. Let’s conservatively estimate that such a person produces $200,000 more impact (over 3–5 years) than the median hire, whether through better execution, lower coordination costs, or more initiative. That gives an expected benefit of $2,000 per round, for a cost of $300—a benefit-to-cost ratio of almost 7 to 1.

Even under more pessimistic assumptions—say, only 1 in 200 applicants yields such upside, and the marginal impact is just $100,000—the expected benefit is still $500, which exceeds the cost. In fact, the intervention only becomes net-negative if the probability of surfacing a high-impact hire drops below ~0.15% (at the $200,000 value) or if the marginal value of such a hire falls below $30,000 (at the 1-in-100 rate)—both of which seem implausibly low given the known variance in individual employee performance.
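This arithmetic is easy to sanity-check. The sketch below just recomputes the expected benefits and break-even thresholds from the numbers stated in this appendix (note that at the 1-in-100 rate against a $300 cost, the value break-even works out to $30,000):

```python
# Recompute the appendix's back-of-the-envelope numbers.
cost_per_round = 3 * 100  # 3 extra hours at $100/hour

def expected_benefit(p_high_impact, marginal_value):
    """Expected extra value per round from surfacing an overlooked hire."""
    return p_high_impact * marginal_value

headline = expected_benefit(1 / 100, 200_000)     # main scenario
pessimistic = expected_benefit(1 / 200, 100_000)  # pessimistic scenario

# Break-even thresholds against the $300 cost:
breakeven_p = cost_per_round / 200_000        # holding value at $200k
breakeven_value = cost_per_round / (1 / 100)  # holding the 1% rate

print(headline, pessimistic, breakeven_p, breakeven_value)
```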

In short, the upside is asymmetric: even if we surface just one overlooked high-performing candidate every few rounds, the total value gained far exceeds the operational cost. And all of this is testable: a few pilots would be enough to assess whether blind-first screening meaningfully improves talent discovery.

  1. ^

    To be fair, I’ve had some success in the past getting a few grants. But getting a grant is a different story from getting a job.

  2. ^

    Well, I’ve been there one afternoon in 2022, but just for lunch.

  3. ^

    And I’m planning to do it more myself

  4. ^

    I’m aware that some research on blind hiring already exists. I’ve skimmed a few studies but haven’t read them carefully enough to cite them responsibly here. If you have links to relevant data or case studies, I’d be grateful.