Executive summary: This post offers a thoughtful, step-by-step framework for outreach conversations aimed at encouraging people to reduce animal suffering by gently connecting their existing values with their consumer choices, emphasizing empathy, non-confrontation, and gradual change rather than aggressive debate or moral pressure.
Key points:
Effective outreach involves speaking calmly and positioning oneself to reduce confrontation, fostering a safe space for open dialogue.
The post uses a metaphor of being overwhelmed (like falling into a deep pool) to highlight why people may resist rapid change and why breaking down ideas into manageable steps is crucial.
Conversations should explore shared values first (e.g., opposition to cruelty), then gently link those values to everyday actions such as purchasing decisions, inviting reflection without judgment or condescension.
Avoid confrontational or accusatory approaches that project assumptions or force people into defending inconsistent positions, as this tends to provoke defensiveness and can be counterproductive.
Instead, guide people to reconsider how societal norms about animal products may conflict with their deeper values for compassion, allowing them to acknowledge inconsistencies on their own terms.
The approach includes practical dialogue examples that progress from discussing emotional reactions to animal suffering to encouraging small commitments toward reducing support for harmful practices, emphasizing respect for individual choice and gradual alignment of actions with values.
Executive summary: This personal, reflective post explores the fragile and rare nature of genuine altruism beyond evolutionary self-interest, emphasizing goodness as a fundamental, beautiful ideal rooted in trust, care, and moral openness that Effective Altruism uniquely tries to cultivate despite social and evolutionary challenges.
Key points:
Altruism likely evolved as a trait favored for enhancing individual reproductive success through cooperation and trust, which complicates defining it as truly selfless; yet, most altruistic intentions feel genuine and unconscious rather than purely calculated.
The author distinguishes goodness as a deep, shared moral emotion—calm, patient, other-oriented, and free from fear or self-protection impulses—that transcends mere strategic or consequentialist reasoning common in Effective Altruism discussions.
Genuine goodness manifests especially in caring for those who cannot reciprocate, such as non-human animals, where altruism is purely for goodness’s sake rather than self-interest or mutual benefit.
The natural world’s brutal scarcity and evolutionary history limit the emergence of widespread altruism, making genuine self-sacrifice a novel and fragile adaptation, but recent material abundance and technology create new space for expanding it.
Effective Altruists exemplify this emerging goodness by prioritizing the welfare of all sentient beings, including distant future generations, even at personal cost, though motives may vary and include social signaling.
The post highlights the importance of protecting and nurturing this “tiny flicker” of altruism and goodness as it holds the promise of a world where trust, security, and mutual care are widespread, reducing conflict and enabling more joy and flourishing.
Executive summary: Notify Health’s pilot vaccination reminder program in Nigeria shows promising early evidence that scalable, low-cost SMS and voice call reminders can significantly improve timely childhood vaccination rates by addressing caregiver knowledge gaps, with a clear plan to expand and enhance cost-effectiveness despite some measurement limitations.
Key points:
Problem context: Millions of children, especially in Nigeria, miss routine vaccinations due to knowledge barriers like forgetting appointments or misunderstanding schedules; Nigeria has the world’s highest number of zero-dose children.
Intervention design: Notify Health uses digitized immunization registers and automated SMS plus voice reminders (in local languages) sent before vaccine due dates to caregivers, addressing the demand-side gap inexpensively and at scale.
Pilot results: Over 2,200 children were enrolled in Kogi State, achieving improved data quality and sending 42,000+ reminders; timely vaccination rates for key vaccines (Penta-1 and Penta-2) rose by 12–24 percentage points among enrolled children compared to unenrolled peers.
Caregiver feedback and operational feasibility: Most caregivers found reminders helpful, and the automated system handled high message volumes well; however, a notable share of phone numbers were inaccurate, highlighting data quality challenges.
Limitations and interpretation: The observational pilot lacks a randomized control, making causality uncertain; improvements could partly stem from concurrent government campaigns or better record keeping, but results align with existing evidence and plausible mechanisms.
Cost-effectiveness and future plans: Current estimates suggest the program is roughly 5x more cost-effective than unconditional cash transfers, with clear strategies to enroll broader populations, reduce costs (e.g., by shifting photo capture to health workers), pilot in new states, and conduct rigorous evaluations to strengthen causal claims and scale impact.
Executive summary: Drawing from large-scale, high-quality studies, this evidence-based analysis argues that mental health is the most important modifiable factor for overall well-being, followed by the quality of romantic relationships—with strong emotional bonds, commitment, and low conflict predicting both happiness and stability—while physical health and income play smaller roles and compatibility remains hard to predict.
Key points:
Mental health is the strongest modifiable predictor of well-being, especially for life satisfaction and likely for affective (emotional) well-being, according to Clark et al. (2018) and broader clinical research.
High-quality romantic relationships are the next most influential factor, with satisfaction, perceived commitment, and low conflict being the best predictors of emotional well-being and relationship longevity (Joel et al., 2020; Hudson et al., 2020).
Sexual history—specifically having 9+ premarital sexual partners—strongly predicts divorce risk, with an effect size comparable to that of love and commitment (Smith & Wolfinger, 2023).
Breakups cause long-lasting emotional harm, with no evidence of full hedonic adaptation (Kettlewell et al., 2020), while marriage offers only a short-term emotional boost.
Predicting compatibility remains elusive, as personality and demographic traits add little beyond how someone perceives their relationship; models explain limited variance and fail to predict changes over time (Joel et al., 2020).
Other factors like physical health, income, and employment contribute modestly to well-being, while education and criminal history have minimal effects; many life events show only temporary emotional impact or modest cognitive effects.
Executive summary: This exploratory post outlines a broad set of concrete research directions stemming from the “Gradual Disempowerment” (GD) paper, aiming to help others productively investigate how AI might diminish human influence over time and what strategies could prevent this—emphasizing breadth over depth and offering mentorship to those who take on the work.
Key points:
Integrated AI x-risk dynamics: The author encourages research into how GD interacts with other AI-related risks (like misalignment, coup risk, or recursive self-improvement), including mapping tradeoffs and exploring solution robustness across multiple failure modes.
Counterarguments and their assumptions: Several objections to GD—such as the strategy-stealing assumption, aligned AI interventions, or natural societal adaptations—deserve fuller exploration, ideally resulting in a fair synthesis of competing views.
Beyond competition: GD is not solely about competitive pressures; it also involves emergent influence patterns and internal dynamics that can lead to human disempowerment even in the absence of direct competition, warranting deeper conceptual analysis.
Describing and aiming for positive futures: Clarifying what “good outcomes” look like—both long-term and within the next few years—is a central priority, including discussions of paternalism, cultural evolution, and potential relationships between humans and AGI.
Social science and historical grounding: Suggested projects include reassessing the robustness of societal fundamentals (e.g., property rights, human agency) and drawing insights from historical transitions and technologies to better understand power dynamics and cultural shifts.
Indicators and practical policy levers: Developing measurable indicators for GD and actionable policy interventions—particularly short-term “red tape” solutions—is seen as a highly impactful yet currently neglected area.
Technical research areas: Promising directions include simulating civilizations, studying AI cognition and agency, formalizing civilizational alignment, and advancing differential empowerment mechanisms that support beneficial governance structures.
Complementarity over replacement: The post advocates for orienting AI development toward human-AI complementarity (e.g., cyborg evaluations, better interfaces), to avoid defaulting to a replacement paradigm that risks disempowering humans further.
Call to action with mentorship: The author offers personalized feedback to those who pursue these research directions, particularly encouraging undergraduates or early-career thinkers to engage with low-barrier entry points.
Executive summary: This post argues that the AI governance field suffers from a surplus of abstract research and a shortage of advocacy, leading to a backlog of promising but unused policy ideas (“orphaned policies”); to remedy this, the author recommends that researchers make their work more actionable by drafting concrete policy documents and adopting one of eleven specific, underdeveloped proposals detailed in the post.
Key points:
Advocacy bottleneck: The field has a researcher-to-advocate imbalance (~3:1), meaning many good AI policy ideas lack champions who can bring them to policymakers—creating a large backlog of “orphaned” proposals.
Drafting real policies is tractable and impactful: Researchers should draft actual legislative or governance documents, which are often shorter and more influential than academic papers, and easier for policymakers to act on.
Make white papers concrete and directive: Even without drafting full legislation, researchers can increase their work’s utility by including specific recommendations, estimates, and implementation details.
Legal and funding constraints are navigable: Despite 501(c)(3) limits on lobbying, researchers can still advocate specific policies if framed as nonpartisan analysis with a clear evidentiary basis.
Catalog of eleven underdeveloped ideas: The post outlines eleven “orphaned” policies—ranging from compute monitoring and AI insurance to visa reform and regulation of lethal autonomous weapons systems (LAWS)—each with specific missing components that a researcher could fill in.
Call to action and teaser for next post: The author urges researchers to “adopt” a policy and help develop it for real-world use, and suggests broader institutional reforms (to be discussed in the final post) are needed to shift funding from research to advocacy.
Executive summary: Open Philanthropy presents a quantitative framework for identifying promising vaccine R&D targets by analyzing disease burden and funding gaps, concluding that diseases like group A streptococcus, syphilis, and hepatitis C are particularly neglected relative to their projected future impact and merit greater philanthropic investment.
Key points:
Framework and main finding: Applying an importance-neglectedness framework to 84 infectious diseases reveals significant disparities—some high-burden diseases receive as little as one-tenth the R&D funding of others, suggesting overlooked opportunities for impact (an illustrative sketch of this ranking follows this list).
Top neglected targets: Group A streptococcus, hepatitis C, and syphilis have among the lowest R&D funding per projected DALY in 2050 and no widely available vaccines, making them compelling candidates for philanthropic support.
Methodology and limitations: Estimates are based on GBD projections, G-FINDER funding data, and vaccine availability, with acknowledged gaps (e.g. data exclusions, inconsistent disease groupings, and uncertainty in burden forecasts).
Insights into specific diseases: Some high-burden diseases like malaria and TB already receive substantial funding but still lack effective vaccines; others, like hepatitis B, appear underfunded due to data exclusions (e.g. high-income country focus).
Grantmaking implications: This analysis has informed Open Phil’s vaccine grant portfolio and led to investments in other neglected areas such as hepatitis B cures, syphilis testing, and antivenom development.
Broader cautions: The authors emphasize that quantitative metrics alone are insufficient; tractability, implementation context, and expert judgment are essential for effective prioritization.
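The core comparison behind this framework is a simple ratio: annual R&D funding divided by projected disease burden. As a minimal illustration only (the disease names are real, but every number below is a placeholder, not a figure from the post or from Open Philanthropy), the ranking might be computed like this:

```python
# Illustrative sketch of an importance-neglectedness ranking:
# rank diseases by annual R&D funding per projected 2050 DALY.
# All numbers here are placeholders, NOT figures from the post.

diseases = {
    # name: (annual R&D funding in $ millions, projected 2050 burden in millions of DALYs)
    "Group A streptococcus": (25, 15),
    "Syphilis": (40, 10),
    "Hepatitis C": (150, 12),
    "Tuberculosis": (700, 40),
}

def funding_per_daly(funding_musd: float, dalys_m: float) -> float:
    """Dollars of R&D funding per projected DALY (lower = more neglected)."""
    return (funding_musd * 1e6) / (dalys_m * 1e6)

# Sort from most to least neglected.
for name, (funding, dalys) in sorted(diseases.items(), key=lambda kv: funding_per_daly(*kv[1])):
    print(f"{name:24s} ${funding_per_daly(funding, dalys):6.2f} of R&D funding per projected DALY")
```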
Executive summary: This exploratory post argues that humanity, as a “constitutional creature” constantly renegotiating how to live well together, now faces a pivotal “constitutional moment” due to AI’s transformative power—requiring urgent but careful reformation of societal norms to avoid catastrophe, preserve distributed power, and eventually construct a flourishing and adaptable future that allows for ongoing deliberation and persistent well-being.
Key points:
Three-stage vision for the future: The path to a flourishing future involves (1) navigating the urgent constitutional moment posed by AI, (2) constructing normative structures that balance freedom with constraints to avoid atrocity and enable flourishing, and (3) realizing a dynamic, ongoing form of utopia.
AI as a constitutional moment: The rise of transformative AI demands rapid normative renegotiation because failing to adapt would itself constitute a radical change. This moment requires caution to avoid locking in harmful structures or experiencing catastrophic failure.
Three challenges of the constitutional moment:
The pace of AI progress makes thoughtful deliberation difficult.
AI presents potential existential risks, including takeover or enabling catastrophic technologies.
AI could undermine distributed power, centralizing control and weakening democratic deliberation.
Post-crisis construction phase: If we successfully navigate the constitutional moment, we must then deliberately construct societal norms that preserve both human autonomy and collective flourishing, akin to Will MacAskill’s concept of “viatopia”—a stable state that makes dystopia unlikely and positive futures more reachable.
Utopia as process, not end-state: Rather than a fixed ideal, utopia is a self-sustaining process of bounded renegotiation—supporting persistent flourishing without collapsing into rigidity or chaos. Our current role is to pass the baton wisely, improving both the world and the tools for future deliberation.
Executive summary: This emotionally urgent, exploratory blog post argues that extreme suffering—particularly that endured by farmed and wild animals—is unimaginably horrific, staggeringly widespread, and morally paramount, and that recognizing this should dramatically reshape our priorities and motivate donations to highly cost-effective interventions that reduce suffering.
Key points:
Extreme suffering is unimaginably horrific and morally weighty: The author urges readers to vividly imagine unbearable pain (e.g., boiling alive) to appreciate just how horrific such experiences are, and contends that they often eclipse all other moral considerations.
Most animals endure extreme suffering, especially in agriculture: Billions of farmed animals experience prolonged, intense pain akin to torture (e.g., hens enduring hundreds of hours of disabling pain or pigs being gassed or steamed to death), often under practices considered “humane.”
Even insects and small organisms may suffer intensely: Though insect consciousness is uncertain, evolutionary reasons suggest they might feel extreme pain, and their sheer number (~10^18) makes this a high-priority concern.
Suffering dominates most animals’ lives: The author argues that short lives ending in painful deaths (e.g., starvation, being crushed) likely mean most animals—especially invertebrates—have net-negative lives.
Moral seriousness demands action, not just intellectual reflection: The post critiques detached moral reasoning and insists that a genuine reckoning with suffering should provoke urgent, empathetic action.
Concrete recommendations for donations: The author recommends supporting the Shrimp Welfare Project, GiveWell, and two other unspecified causes as cost-effective ways to prevent extreme suffering, offering a subscription incentive for monthly donors.
Executive summary: This exploratory guide compiles and organizes advice from EA-aligned and mainstream governance sources to help founders structure effective nonprofit boards, emphasizing clear roles, mission alignment, and board member capacity over prestige, while acknowledging uncertainty and variability in best practices.
Key points:
Clarify board structure and responsibilities early on — Founders should decide not only formal roles (e.g. Chair, Treasurer) but also the board’s intended function (e.g. governance vs advisory) and processes (e.g. agenda setting, CEO evaluation cadence).
Prioritize board members with time and relevant skills — Multiple sources warn against filling boards with high-status individuals who lack capacity, advocating instead for members who bring specific competencies and are willing to engage.
Define success and organizational trajectory — A clear, actionable vision for the nonprofit helps guide board composition and strategic decision-making; vague goals like “reduce AI x-risk” are insufficient.
CEO oversight is a critical board duty — The most universally agreed-upon responsibility is hiring, evaluating, and if necessary, replacing the CEO, with recommendations for regular, structured assessments.
Legal and practical responsibilities require attention — Boards must comply with governance standards and may consider liability insurance, term limits, and clear voting protocols; practical templates and decision frameworks are provided.
Advisory structures and informal advisors can complement governance — Especially in early stages, having a mix of legal board members and informal but reliable advisors can balance risk management with flexibility and insight.
Executive summary: This reflective comparison explores the relationship between Effective Altruism (EA) and the emerging School for Moral Ambition (SMA) movement founded by Rutger Bregman, noting both philosophical overlap and significant differences in culture, methodology, and emphasis, and offering guidance for EAs curious about SMA as a complementary or alternative community.
Key points:
Shared foundations but distinct emphases: Both EA and SMA promote consequentialist ethics, moral circle expansion, and impactful careers, but SMA prioritizes enthusiasm, action, and emotional intelligence over EA’s analytical rigor and focus on tradeoffs.
Cultural divergence around feedback and ambition: EA is rooted in critical evaluation and cost-effectiveness, often tolerating hard truths, while SMA emphasizes “Radical Kindness” and avoids hyper-rationalist or guilt-based approaches, aiming for inclusivity and emotional resonance.
Differing approaches to impact measurement: While both movements use variants of the ITN framework (SMA’s version is Sizable, Solvable, Sorely Neglected), SMA is more comfortable with qualitative reasoning and systemic change than EA, which tends to prioritize measurable, high-EV interventions.
Cause areas and collaboration potential: SMA’s focus includes causes like protein transition and anti-tobacco efforts, some of which overlap with EA interests and present opportunities for cooperation between the communities.
Strategic considerations and personal choice: The author suggests EAs who value rigor, philosophy, and longtermism may prefer EA, while those drawn to warmth, pluralism, and direct action may find SMA inspiring; she personally intends to engage with both.
Novelty and reputational dynamics: SMA’s distinct branding allows it to reach new audiences and sidestep some of EA’s reputational baggage, making it a potentially valuable and politically flexible ally in doing good.
Executive summary: This exploratory and reflective post grapples with the tension between two clusters of values—dynamism and pragmatic power versus humility and virtuous restraint—arguing that while both are important, recent experiences and thinking have pushed the author to emphasize virtues like humility, cooperation, and pluralism, especially in navigating transformative technologies like AI, where locking in current preferences risks undermining long-term flourishing.
Key points:
Two value clusters are in tension: One emphasizes decisiveness, tradeoffs, and real-world impact (“rolling up sleeves”), while the other emphasizes humility, epistemic rigor, and wariness of power’s corrupting effects. The author has shifted more toward the latter, especially in the context of AI.
Power-seeking, even with good intentions, often warps judgment: Observations of altruistically motivated actors failing to use power wisely have increased the author’s skepticism of centralizing influence.
Virtue ethics as a delegation strategy: Virtue can be seen as a way of shaping future selves or agents, and focusing on internal character might prevent failures that arise from short-term, pragmatic consequentialism.
Dynamism versus stasis in AI governance: Drawing on thinkers like Helen Toner and Joe Carlsmith, the post warns that preventing catastrophic AI risks via top-down control could stifle experimentation, freedom, and the possibility of decentralized progress.
The importance of preserving “kernels” for future governance: Rather than locking in decisions now, we should aim to pass on values, tools, and structures that future, wiser generations can use to navigate challenges more effectively.
Wisdom longtermism over welfare longtermism: The author favors a focus on building toward a wiser, more empowered civilization—one that can better solve deep future challenges—rather than optimizing directly for current conceptions of welfare.
Executive summary: This celebratory impact review shares updates from Charity Entrepreneurship’s incubated charities—spanning mental health, maternal health, education, early child health, policy, research, and animal welfare—highlighting promising cost-effectiveness, impressive reach, and future plans to scale and deepen their impact; the post is largely descriptive, with organizations providing their own progress snapshots.
Key points:
Strong early-stage impact and promising cost-effectiveness across sectors: Multiple charities report early results that suggest high cost-effectiveness—e.g., HealthLearn’s newborn care course is estimated to be 24× more cost-effective than cash transfers, Kaya Guides aims for 45 WELLBYs per $1,000 by 2026, and Lafiya Nigeria’s family planning work is modeled at up to 53× cash transfers.
Rapid scaling plans and strategic partnerships: Several charities are entering ambitious scale-up phases—e.g., Learning Alliance expects to expand from 15,000 to 40,000 students by 2026, Vida Plena plans to double its reach and government partnerships, and Healthy Futures is supporting a national rollout of syphilis testing in the Philippines.
Innovation in delivery models tailored to local contexts: Innovations include Kaya Guides’ WhatsApp-based therapy for rural Indians, Taimaka’s <$100 malnutrition treatment in Nigeria, and NOVAH’s IPV-prevention radio drama reaching tens of thousands of Rwandans.
Policy influence and systems change efforts underway: Organizations like Concentric Policies and Healthy Futures are engaging with governments to embed policy reforms (e.g., tax policy changes, syphilis screening mandates), while AMI supports contraceptive procurement legislation at the state level in Nigeria.
Animal welfare interventions scale in reach and sophistication: Shrimp Welfare Project and Fish Welfare Initiative report millions to billions of animals helped through humane slaughter, water quality, and density improvements, with growing focus on precision aquaculture and R&D for scalable interventions.
Meta and research-oriented projects show speculative but high-upside potential: CEARCH estimates a giving multiplier ≥10× GiveWell through cause area exploration and donor influence, while emphasizing the challenge of moving money toward identified opportunities.
Executive summary: Open Philanthropy explains how it uses back-of-the-envelope calculations (BOTECs) to estimate the cost-effectiveness of grants across focus areas like global health, lead exposure reduction, animal welfare, and effective giving, illustrating their approach through detailed examples and emphasizing both the utility and limitations of these rough but decision-critical models.
Key points:
BOTECs clarify expected impact by estimating a grant’s social return on investment (SROI), helping Open Phil determine whether a grant clears its cost-effectiveness threshold — currently ~2,000x in “Open Phil dollars” for Global Health and Wellbeing grants.
The models vary by grant type — DALYs averted for health, suffering reduced for animals, or funds raised for effective charities — and may be forward- or backward-looking depending on available data and theory of change.
BOTECs guide but don’t dictate decisions; qualitative factors like leadership, track record, and unusual upside are also considered, and multiple BOTEC versions test the robustness of conclusions across different scenarios.
Examples illustrate application and nuance: A tuberculosis R&D grant modeled to avert nearly 20,000 deaths annually showed a 3,000x SROI; a lead detection method grant had an expected 6,500x SROI; an effective giving org cleared a 2x bar for fundraising ROI; and a broiler welfare campaign surpassed the animal welfare team’s separate bar (a minimal illustrative BOTEC is sketched after this list).
Open Phil adjusts BOTECs over time as new information arises — for example, reassessing speedup timelines or success probabilities post-grant — and openly acknowledges uncertainties, estimation challenges, and speculative assumptions in modeling.
The post invites community feedback and aims to demystify Open Phil’s quantitative thinking, while signaling that BOTECs are one tool among many in a broader evaluative process.
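As a rough sketch of how a grant-level BOTEC of this kind can be structured (every number below is a made-up placeholder rather than an Open Phil figure, and the calculation is a simplification of whatever models they actually use), the basic shape is: estimate the expected benefit, convert it into a common unit, and divide by the grant's cost.

```python
# Minimal BOTEC sketch: social return on investment (SROI) for a hypothetical
# health R&D grant. Every number is a made-up placeholder, not an Open Phil
# figure; the point is only the shape of the calculation.

grant_cost_usd = 2_000_000           # size of the hypothetical grant
p_success = 0.10                     # chance the funded research pays off
deaths_averted_per_year = 5_000      # if it succeeds
years_of_impact = 10
dalys_per_death_averted = 30         # rough placeholder conversion
usd_value_per_daly = 100_000         # placeholder "social value" of one DALY

expected_dalys = (p_success * deaths_averted_per_year
                  * years_of_impact * dalys_per_death_averted)
expected_social_value = expected_dalys * usd_value_per_daly
sroi = expected_social_value / grant_cost_usd

print(f"Expected DALYs averted: {expected_dalys:,.0f}")
print(f"SROI: {sroi:,.0f}x")
print("Clears the ~2,000x bar mentioned above:", sroi >= 2_000)
```

In practice, as the post notes, several versions of a model like this would be run with different assumptions to test how robust the conclusion is.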
Executive summary: This post presents the findings of a pilot survey on AI safety conducted in Yucatán, Mexico, revealing a strong local consensus for government regulation and the creation of a dedicated AI safety agency; the project, a collaborative academic initiative inspired by international methodologies, highlights the value of localizing global tools and calls for expanded research despite limitations in sample size.
Key points:
The pilot survey was adapted from the ESPAI 2023 study and the article “Thousands of AI Authors on the Future of AI,” focusing on ethical, regulatory, and social concerns about AI among students and researchers in Yucatán.
The study found strong support for government regulation and a specialized agency for AI safety in Mexico, with concerns about authoritarian misuse and socioeconomic inequality resonating particularly in the local context.
The project was initiated through an AI-Safety course and executed in collaboration with CentroGeo and the Universidad Politécnica de Yucatán, involving local students Valeria Ramírez and Janeth Valdivia.
Only 36 individuals participated, limiting the generalizability of the findings, though reviewers praised the initiative’s relevance and urged future expansion through broader sampling and policy-oriented outputs.
Janeth’s work was accepted at COMIA and will be published in a peer-reviewed journal; both Janeth’s and Valeria’s contributions were recognized for methodological rigor and meaningful adaptation of global frameworks.
The authors emphasize the importance of academic collaboration and institutional support in developing advocacy-focused AI safety research in Latin America.
Executive summary: This personal reflection argues that AI “warning shots”—minor disasters that supposedly wake the public to AI risk—are unlikely to be effective without substantial prior public education and worldview-building, and warns against the dangerous fantasy that such events will effortlessly catalyze regulation or support for AI safety efforts.
Key points:
Hoping for warning shots is morally troubling and strategically flawed—wishing for disasters is misaligned with AI safety goals, and assumes falsely that such events will reliably provoke productive action.
Warning shots only work if the public already holds a conceptual framework to interpret them as meaningful AI risk signals; without this, confusion and misattribution are the default outcomes.
Historical “missed” warning shots (e.g., ChatGPT, deceptive alignment research, AI systems passing the Turing Test) show that even experts struggle to agree on their significance, undermining their value as rallying events.
The most effective response is proactive worldview-building, not scenario prediction; preparing people to recognize and respond to diverse risks requires ongoing public education and advocacy.
PauseAI is presented as an accessible framework that communicates a basic, actionable AI risk worldview without requiring deep technical knowledge, helping people meaningfully respond even amid uncertainty.
The fantasy that warning shots will bring in the cavalry discourages the necessary grind of advocacy, but regulation (even if catalyzed by tragedy) ultimately relies on groundwork laid in advance—not just on crisis moments.
Executive summary: This post argues that the AI governance community must urgently shift resources from research to advocacy, asserting that sufficient understanding already exists to support beneficial policies, and that delay—whether due to uncertainty, strategic caution, or capacity concerns—risks squandering a narrow and fast-closing window to meaningfully influence AI development before superintelligence or entrenched industry power makes regulation infeasible.
Key points:
Advocacy is more central than research for AI governance now: The core issue is not a lack of understanding but misaligned incentives for AI developers; fixing this requires political action, not further academic research.
Common objections to immediate advocacy are weak: Arguments that we lack robust policies, skilled advocates, or sufficient political influence underestimate our readiness and the diminishing returns of further delay.
Basic safety policies are clearly beneficial and feasible: Measures like audits, whistleblower protections, and liability mechanisms offer positive expected value and are analogized to historically successful safety interventions like seatbelts.
Regulation fears (e.g., backfiring, oligopoly, nationalization) are overstated: Thoughtfully crafted policies can avoid unintended consequences, and the post rebuts concerns around driving companies offshore, excessive cost burdens, regulatory capture, and governmental overreach.
Advocacy skills and infrastructure must be built through action: Effective political influence requires on-the-ground experience, relationship-building, and training—none of which can be developed passively through more research.
There’s no time to wait for ideal conditions: Given long legislative timelines and the accelerating pace of AI development, waiting for more favorable advocacy conditions may render future efforts moot as corporate power and technological capability outstrip regulatory leverage.
Executive summary: This reflective memo reviews a year and a half of EA Forum event experiments—including Draft Amnesty Weeks, Debate Weeks, and Giving Seasons—finding that lightweight initiatives like Draft Amnesty are cost-effective for surfacing new content, while higher-effort events like Debate Weeks can successfully stimulate discourse but require careful framing and coordination; future iterations will likely double down on these learnings while adjusting based on engagement and quality tradeoffs.
Key points:
Draft Amnesty Weeks are low-cost and effective at generating Forum posts, encouraging new authors, and increasing engagement modestly; the 2025 event outperformed 2024, and the author plans to experiment with running them twice yearly.
Debate Weeks successfully foster deep discussion, especially when well-framed and accompanied by features like homepage slider polls and symposiums; however, selecting and wording debate topics remains challenging and time-intensive.
Animal Welfare vs Global Health Debate Week was the most successful across metrics—posts, karma, comments, and engagement—highlighting the potential of controversial but well-chosen topics, though some feedback flagged the framing as too combative.
Giving Season 2024 increased content volume significantly over 2023 (111 vs. 63 posts), but had lower total engagement and raised less in direct donations ($15K vs. $30K); however, the new ranked-choice Donation Election and public comment thread did increase discourse.
Marginal Funding Week saw strong growth, with 46 participating organizations in 2024 (up from 19), likely driven by requiring posts/comments for donation election eligibility.
The memo underscores tradeoffs between cost, content volume, and quality, advocating for iterating on successful formats (e.g. Draft Amnesty, Symposiums) while refining goals and expectations for each event type.
Executive summary: This exploratory analysis argues that individuals in high-income countries could plausibly do up to a million times more good than the average donor by giving 50% of their income to the most cost-effective animal charities—particularly those reducing farmed animal suffering through alternative protein development—based on a hybrid moral theory (“mild welfarism”) and cost-effectiveness comparisons across charitable interventions.
Key points:
Mild welfarism—a hybrid ethical theory combining utilitarian and deontological principles—suggests we should maximize total welfare while respecting individuals’ rights not to be used as a means.
Survey-based welfare comparisons indicate that the suffering of farmed animals like broiler chickens can outweigh typical human welfare gains, making animal welfare interventions morally urgent.
Donating to cultivated meat R&D may spare at least 10 animals per euro, making it up to 100 times more cost-effective than general animal advocacy and roughly 10,000 times (100 × 100) more impactful than average charitable giving focused on human welfare.
GiveWell top charities are already ~100× more cost-effective than typical Western charitable efforts, especially in health, yet animal-focused charities could exceed this impact further.
Combining unusually high donation levels (e.g. 50% of income) with these extreme effectiveness differentials leads to the conclusion that one person could do a million times more good than average (the rough arithmetic is sketched after this list).
Recommended donation channels include Animal Charity Evaluators, the GWWC Animal Welfare Fund, and localized platforms like Effectief Geven and Doneer Effectief.
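The headline “million times” figure is the product of a few multipliers rather than a single estimate. A minimal sketch of that arithmetic, assuming for illustration (this baseline is not from the post) that a typical donor gives about 0.5% of their income to typical charities:

```python
# Rough sketch of the multiplicative argument behind the "million times" claim.
# The ~0.5%-of-income baseline is an illustrative assumption; the 100 x 100
# effectiveness factor is the one cited in the summary above.

typical_donation_share = 0.005   # assumed: average donor gives ~0.5% of income
proposed_donation_share = 0.50   # 50% of income, as in the post
donation_multiplier = proposed_donation_share / typical_donation_share   # ~100x more money given

effectiveness_multiplier = 100 * 100   # cultivated meat R&D vs average human-welfare charity

total = donation_multiplier * effectiveness_multiplier
print(f"~{total:,.0f}x more good than the average donor")   # ~1,000,000x
```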
Executive summary: This exploratory essay argues that John Rawls’ veil of ignorance, when interpreted through certain theories of personal identity, provides a realistic ethical framework that grounds sentientism—the moral relevance of all sentient beings—and helps resolve the is/ought problem by compelling compassion and evidence-based action toward reducing suffering universally.
Key points:
Rawls’ veil of ignorance, traditionally a thought experiment for justice among humans, gains deeper ethical significance if extended to all sentient beings by considering different theories of personal identity.
The author discusses three views on personal identity—Closed Individualism (the standard “one lifetime” self), Empty Individualism (consciousness as discrete time-slices), and Open Individualism (all consciousness as one)—showing how each supports a broad ethical concern beyond oneself.
Under Closed and Empty Individualism, being “behind the veil” means we could be any sentient being, so rational self-interest encourages reducing suffering for all, since we might end up experiencing it ourselves.
Open Individualism implies an even stronger ethical stance, where caring for others is identical to caring for oneself, reinforcing universal compassion.
Sentientism, defined as prioritizing evidence, reason, and compassion for all conscious experiences, provides a compelling response to the is/ought problem by linking actual experiences of suffering (is) to the moral imperative to alleviate it (ought).
The essay clarifies that this framing is a conceptual and ethical map, not a literal metaphysical claim about souls or consciousness existing before birth, and highlights implications for individual and collective moral action, including AI alignment.
The author aims to establish sentientism as a grounded ethical framework, inviting further discussion and refinement, especially in relation to future AI ethics.