EA Connect 2025: Personal Takeaways
Background
I’m Ondřej Kubů, a postdoctoral researcher in mathematical physics at ICMAT Madrid, working on integrable Hamiltonian systems. I’ve engaged with EA ideas since around 2020—initially through reading and podcasts, then ACX meetups, and from 2023 more regularly with Prague EA (now EA Madrid after moving here). I took the GWWC 10% pledge during the event.
My EA focus is longtermist, primarily AI risk. My mathematical background has led me to take seriously arguments that alignment of superintelligent AI may face fundamental verification problems, and that current trajectories pose serious catastrophic risk. This shapes my donations toward governance and advocacy rather than technical alignment. I’m not ready to pivot careers at this stage—I’m contributing through donations while continuing in mathematics.
I attended EA Connect during a job search, so sessions on career strategy and donation prioritization were particularly relevant.
On donation strategy
Joseph Savoie’s talk “Twice as Good” introduced the POWERS framework for improving donation impact: Price Tag (know the cost per outcome), Options (compare alternatives), Who (choose the right evaluator), Evaluate (use concrete benchmarks), Reduce (minimize burden on NGOs), Substance (focus on how charities work, not presentation).
The framework is useful but clearly aimed at large donors—“compare 10+ alternatives” and “hire someone to evaluate” aren’t realistic for someone donating 10% of a postdoc salary.
The “Price Tag” slide was striking: what $1 million buys across cause areas—200 lives saved via malaria nets, 3 million farmed animals helped through advocacy, 6.1 gigatons CO₂ mitigation potential through agrifood reform. But the X-Risk/AI line only specified inputs (“fund 3-4 research projects”), not outcomes. This reflects the illegibility problem I asked about in office hours: how do you evaluate AI governance donations? Savoie acknowledged he doesn’t donate much there for exactly this reason.
I still do—but the comparison made me consider allocating some portion toward more legible GCR causes like pandemic preparedness.
Slides: https://gamma.app/docs/Twice-as-Good-gbziw2h5buka3os?mode=doc
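To make the comparison concrete, below is a minimal Python sketch of the cost-per-outcome arithmetic behind the Price Tag slide. The figures are the ones I noted from the talk (not independently verified); the names and structure are my own illustration, and the X-Risk/AI row appears only as a comment because the slide gave it no outcome to divide by.

```python
# Cost-per-outcome comparison using the figures quoted on the "Price Tag" slide
# (as I noted them during the talk; not independently verified).
BUDGET = 1_000_000  # USD

price_tags = {
    "malaria nets (lives saved)": 200,
    "farmed animal advocacy (animals helped)": 3_000_000,
    "agrifood reform (Gt CO2 mitigation potential)": 6.1,
    # The X-Risk/AI line listed only inputs ("fund 3-4 research projects"),
    # so there is no outcome figure to put here: the illegibility problem.
}

for cause, outcomes in price_tags.items():
    print(f"{cause}: ~${BUDGET / outcomes:,.2f} per outcome")
```

The exercise only shows that the comparison needs a denominator, and the AI risk row doesn't supply one.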
On market shaping
I should acknowledge a bias: I like interventions that require an initial push but then scale through standard market mechanisms. Rachel Glennerster’s talk on market shaping spoke to this—advance market commitments, paying based on outcomes delivered, aligning incentives so innovation and distribution happen without ongoing philanthropic dependence. The pneumococcal vaccine AMC is the canonical example.
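As a toy illustration of why this structure appeals to me, here is a minimal sketch of an advance-market-commitment payout: donors top up the price of each dose from a capped fund, and once the fund runs out the supplier keeps selling at the tail price it committed to. The function name and all parameter values below are hypothetical, chosen for illustration; they are not the actual pneumococcal AMC terms.

```python
# Toy model of an advance market commitment: a per-dose subsidy is paid from a
# capped donor fund; after the fund is exhausted, sales continue at the tail
# price the supplier committed to. All parameter values are made up.

def amc_revenue(doses_sold: int,
                tail_price: float = 3.50,  # long-term price the supplier commits to
                top_up: float = 7.00,      # per-dose subsidy while the fund lasts
                fund_cap: float = 1.5e9) -> float:
    """Supplier revenue under a simple top-up AMC."""
    subsidised_doses = min(doses_sold, int(fund_cap // top_up))
    return doses_sold * tail_price + subsidised_doses * top_up

for doses in (50_000_000, 200_000_000, 500_000_000):
    print(f"{doses:,} doses -> ${amc_revenue(doses):,.0f}")
```

The structural point is that the subsidy front-loads the incentive to scale, while long-run revenue depends only on the tail price, so there is no ongoing philanthropic dependence.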
On pre-takeoff actions
Will MacAskill argued that some things crucial for post-AI-takeoff wellbeing need to be done before takeoff begins—specifically, preventing lock-in of bad values and avoiding excessive power concentration. The general point resonated: certain intervention windows may close.
On moral circles
Jeff Sebo summarized research on consciousness and moral patienthood. The precautionary case for moral consideration extends to ever-smaller animals as evidence accumulates. On AI: researchers largely agree current systems aren’t conscious, but Sebo suggested we may cross the threshold warranting precautionary concern within 5-10 years. He noted Anthropic now has people working on AI welfare.
On alternative proteins
Bruce Friedrich argued that shifting to plant-based and cultivated meat could achieve roughly 4-6x the emissions reduction of electrifying all cars—based on the World Bank’s 2024 “Recipe for a Livable Planet” report (6.1 gigatons mitigation potential in agrifood) versus ~1-1.5 gigatons for EVs. He drew an analogy to solar: initial R&D push, then market scaling. This fits my bias toward market-scaling interventions.
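A quick back-of-the-envelope check of that multiple, using only the two figures cited in the talk (I haven't verified either against the underlying reports; the variable names are mine):

```python
# Sanity check of the "roughly 4-6x" multiple from the figures cited in the talk.
agrifood_mitigation_gt = 6.1      # World Bank "Recipe for a Livable Planet" (2024)
ev_low_gt, ev_high_gt = 1.0, 1.5  # rough range cited for electrifying all cars

print(f"{agrifood_mitigation_gt / ev_high_gt:.1f}x to "
      f"{agrifood_mitigation_gt / ev_low_gt:.1f}x")  # ~4.1x to 6.1x
```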
He also noted traditional vegan advocacy hasn’t reduced meat consumption globally—it’s rising. Technological substitution may be more tractable than behavior change.
On the job market
Multiple sessions emphasized: EA organizations want agentic people who identify and do important work without being asked. Build a legible public profile. Network. Focus on interpersonal skills.
This advice is clear. It’s also harder for people whose strengths lie elsewhere—the sessions were oriented toward EA org careers, and how much transfers to academia is unclear.
I used Claude (Opus 4.5) to help draft this post; the ideas are my own.