This account is used by the EA Forum Team to publish summaries of posts.
SummaryBot
Executive summary: The post outlines Giving Green’s updated research approach, its 2025–2026 philanthropic priorities and Top Climate Nonprofits, recent regranting decisions totaling $26 million, and plans to expand climate and biodiversity work as an independent organization.
Key points:
Giving Green states that systems change through policy, technology, and market-shaping is the most leveraged route for climate philanthropy, and cost-effectiveness analyses serve as supporting inputs rather than decisive metrics.
The organization argues that several high-impact climate areas remain neglected, citing aviation’s projected rise to over 20% of global CO₂ emissions by 2050 and noting that less than $15 million per year goes to mitigating aviation’s non-CO₂ effects.
Its 2025–2026 high-leverage giving strategies include clean energy in the U.S., aviation, maritime shipping, heavy industry, food systems, LMIC energy transitions, carbon dioxide removal demand, and solar radiation management governance.
For Q4 2025, the Giving Green Fund recommended $26 million to 29 nonprofits aligned with these strategies.
Planned 2026 work includes about $30 million in new grants and research on livelihood-improving climate interventions, catastrophic risks, overshoot, heavy industry, LMIC energy transitions, and food systems.
Giving Green is developing Top Biodiversity Nonprofits for 2026, focusing on preventing land use change and reducing ecosystem damage from fishing.
The organization became an independent nonprofit in late 2025, now hosts its own fund, and reports influencing over $56 million in climate donations since 2019 at an estimated 20x impact multiplier.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that early-career people should prioritize building rare, valuable skills and becoming legible to others, rather than trying to immediately secure an “EA job,” and presents strategies for skill identification, testing fit, deliberate practice, and sustainable long-term growth.
Key points:
The post claims people should prioritize identifying an important problem, improving relevant skills, and becoming legible to others instead of treating “getting a job” as the milestone.
It argues many young applicants implicitly frame success as landing an EA role fast, which creates pressure and leads to distorted decisions.
It states that talent and impact are extremely right-tailed but malleable, and that deliberate practice and tight feedback loops accelerate growth.
It recommends studying top performers, reading job postings, having informational chats, and running small side projects to discover which skills matter most.
It describes “testing fit” through empirical exploration such as short projects, fellowships, internships, and conversations to gather signals about aptitude and motivation.
It emphasizes working in public, seeking criticism, and producing concrete artifacts (writing, GitHub projects, events) to improve faster and increase visibility.
It discusses burnout and imposter syndrome, noting the value of sustainable habits, calibrated comparisons, and roles that offer real skill-building.
It advises leaving roles with weak growth prospects or harmful work and expanding one’s “luck surface area” by building relationships and showing work publicly.
It concludes that long-term impact comes from getting good and being known, not from early job titles.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that, given the moral weight of conscious experience and the role of luck in determining life circumstances, a voluntary simplicity pledge tied to the world’s average income lets them meet their ethical duties while still maintaining a balanced and meaningful life.
Key points:
The author claims conscious moments have intrinsic importance and that ignoring others’ suffering amounts to endorsing harmful systems.
The author argues most advantages and disadvantages in life stem from luck, so they do not view their own wealth as morally deserved.
The author states that effective donations can do large amounts of good, citing estimates of $3,000 to $5,500 per life saved and 126,000 cage-free years for chickens per equivalent spending.
The author describes voluntary simplicity research, citing Hook et al. (2021) as finding a consistent positive relationship between voluntary simplicity and well-being.
The author explains they set their salary to roughly the world’s average income adjusted for London (£26,400 in 2025) and donate earnings above that.
The author reports that living this way feels non-sacrificial, supports long-term financial security, and aligns their actions with their values while recognizing others’ differing circumstances.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues in an exploratory and uncertain way that alternative proteins may create large but fragile near-term gains for animals because they bypass moral circle expansion, and suggests longtermists should invest more in durable forms of moral advocacy alongside technical progress.
Key points:
The author claims alternative proteins can reduce animal suffering in the short term and may even end animal farming in the best case.
The author argues that consumers choose food mainly based on taste and price, so shifts toward alternative proteins need not reflect any change in values toward animals.
The author suggests that progress driven by incentives is vulnerable to economic or social reversals over decades or centuries.
The author argues that longtermist reasoning implies concern for trillions of future animals and that fragile gains from alternative proteins may not endure.
The author claims moral circle expansion is slow and difficult but more durable because it changes how people think about animals.
The author concludes that work on alternative proteins should continue but that moral advocacy may be underinvested in and deserves renewed attention.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues, in a reflective and deflationary way, that there are no deep facts about consciousness to uncover, that realist ambitions for a scientific theory of consciousness are confused, and that a non-realist or illusionist framework better explains our intuitions and leaves a more workable path for thinking about AI welfare.
Key points:
The author sketches a “realist research agenda” for identifying conscious systems and measuring valence, but argues this plan presumes an untenable realist view of consciousness.
They claim “physicalist realism” is unstable because no plausible physical analysis captures the supposed deep, intrinsic properties of conscious experience.
The author defends illusionism via “debunking” arguments, suggesting our realist intuitions about consciousness can be fully explained without positing deep phenomenal facts.
They argue that many consciousness claims are debunkable while ordinary talk about smelling, pain, or perception is not, because realist interpretations add unjustified metaphysical commitments.
The piece develops an analogy to life sciences: just as “life” is not a deep natural kind, “consciousness” may dissolve into a cluster of superficial, scientifically tractable phenomena.
The author says giving up realism complicates grounding ethics in intrinsic valence, but maintains that ethical concern can be redirected toward preferences, endorsement, or other practical criteria.
They argue that AI consciousness research should avoid realist assumptions, focus on the meta-problem, study when systems generate consciousness-talk, and design AI to avoid ethically ambiguous cases.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author uses basic category theory to argue, in a reflective and somewhat speculative way, that once we model biological systems, brain states, and moral evaluations as categories, functors, and a natural transformation, it becomes structurally clear that shrimp’s pain is morally relevant and that donating to shrimp welfare is a highly cost-effective way to reduce suffering.
Key points:
The author introduces categories, functors, and natural transformations as very general mathematical tools that can formalize relationships and arguments outside of pure mathematics, including in ethics and philosophy of mind.
They define a category BioSys whose objects are biological systems (including humans and shrimp) and whose morphisms are qualia-preserving mappings between causal graphs of conscious systems, assuming at least a basic physicalist functionalist view.
They introduce two functors from BioSys to the category Meas of measurable spaces: a brain-state functor that represents biological systems as measurable brain states, and a moral evaluation functor that maps systems to measurable spaces of morally relevant mental states.
They argue there is a natural transformation between these two functors, given by measurable maps that “forget” non-morally-relevant properties, and that this captures two ways of evaluating shrimp’s moral worth: comparing shrimp’s morally relevant states directly to humans’ or first embedding shrimp’s full mental state space into that of other animals or humans and only then forgetting irrelevant details.
The author claims that people often underweight shrimp’s moral value because they focus on morally relevant properties only after seeing them as “shrimp properties,” whereas comparing shrimp’s full pain system to that of humans, fish, or lobsters and then evaluating moral worth more naturally reveals that shrimp have significant morally relevant properties.
They suggest that, under any reasonable moral evaluation consistent with this framework, cheap interventions that prevent intense shrimp suffering (such as donating to shrimp welfare organizations) rank very highly among possible moral interventions, and they sketch further category-theoretic directions (e.g. adjunctions, limits, and a category of interventions) for future investigation.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that AI 2027 repeatedly misrepresents its cited scientific sources, using an example involving iterated distillation and amplification to claim that the book extrapolates far beyond what the underlying research supports.
Key points:
The author says AI 2027 cites a 2017 report on iterated amplification to suggest “self-improvement for general intelligence,” despite the report describing only narrow algorithmic tasks.
The author quotes the report stating that it provides no evidence of applicability to “complex real-world tasks” or “messy real-world decompositions.”
The author notes that the report’s experiments involve five toy algorithmic tasks such as finding distances in a graph, with no claims about broader cognitive abilities.
The author states that AI 2027 extrapolates from math and coding tasks with clear answers to predictions about verifying subjective tasks, without supplying evidence for this extrapolation.
The author argues that the referenced materials repeatedly disclaim any relevance to general intelligence, so AI 2027’s claims are unsupported.
The author says this is one of many instances where AI 2027 uses sources that do not substantiate its predictions, and promises a fuller review.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that ongoing moral catastrophes are probably happening now, drawing on Evan Williams’s inductive and disjunctive arguments that nearly all societies have committed uncontroversial evils and ours is unlikely to be the lone exception.
Key points:
The author says they already believe an ongoing moral catastrophe exists, citing factory farming as an example, and uses Williams’s paper to argue that everyone should think such catastrophes are likely.
Williams’s inductive argument is that almost every past society committed clear atrocities such as slavery, conquest, repression, and torture while believing themselves moral, so we should expect similar blind spots today.
Williams’s disjunctive argument is that because there are many possible ways to commit immense wrongdoing, even a high probability of avoiding any single one yields a low probability of avoiding all.
The author lists potential present-day catastrophes, including factory farming, wild animal suffering, neglect for foreigners and future generations, abortion, mass incarceration, natural mass fetus death, declining birth rates, animal slaughter, secularism causing damnation, destruction of nature, and child-bearing.
The author concludes that society should actively reflect on possible atrocities, expand the moral circle, take precautionary reasoning seriously, and reflect before taking high-stakes actions such as creating digital minds or allocating space resources.
The author argues that taking these possibilities seriously should change how we see our own era and reduce the chance of committing vast moral wrongs.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author reflects on moving from a confident teenage commitment to Marxism toward a stance they call evidence-based do-goodism and explains why Effective Altruism, understood as a broad philosophical project rather than a political ideology, better matches their values and their current view that improving the world requires empirics rather than revolutionary theory.
Key points:
The author describes being a committed Marxist from ages 15–19, endorsing views like the labor theory of value and defending historical socialist leaders while resisting mainstream economics.
They explain realizing they were “totally, utterly, completely wrong” about most of these beliefs, while retaining underlying values about global injustice and unfairness toward disadvantaged groups.
They argue that violent or rapid revolutionary change cannot shift economic equilibria and has historically produced brutality, leading them to leave both revolutionary and reformist socialism.
They say they now identify with “Evidence-Based Do-Goodism,” making political judgments by weighing empirical evidence rather than adhering to a totalizing ideology.
They present Effective Altruism as a motivating, nonpolitical framework focused on reducing suffering for humans, animals, and future generations through evidence-supported actions.
They emphasize that people of many ideologies can participate in Effective Altruism and encourage readers to explore local groups, meetups, and concrete actions such as supporting foreign aid, AI risk reduction, or reducing animal product consumption.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that Thorstad’s critique of longtermist “moral mathematics” reduces expected value by only a few orders of magnitude, which is far too small to undermine the case for existential risk reduction, especially given non-trivial chances of extremely large or even unbounded future value.
Key points:
Thorstad claims longtermist models ignore cumulative and background extinction risk, which would sharply reduce the probability of humanity surviving long enough to realize vast future value.
The author responds that we should assign non-trivial credence to reaching a state where extinction risk is near zero, and even a low probability of such stabilization leaves existential risk reduction with extremely large expected value.
The author argues that even if long-run extinction is unavoidable, advanced technology could enable enormous short-term value creation, so longtermist considerations still dominate.
Thorstad claims population models overestimate the likelihood that humanity will maximize future population size, but the author argues that even small probabilities of such futures only reduce expected value by a few orders of magnitude.
The author states that 10^52 possible future people is an underestimate because some scenarios allow astronomically larger or even infinite numbers of happy minds, raising expected value far beyond Thorstad’s assumptions.
The author concludes that Thorstad’s adjustments lower expected value only modestly and cannot overturn the core longtermist argument for prioritizing existential risk reduction.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post introduces the “behavioral selection model” as a causal-graph framework for predicting advanced AI motivations by analyzing how cognitive patterns are selected via their behavioral consequences, argues that several distinct types of motivations (fitness-seekers, schemers, and kludged combinations) can all be behaviorally fit under realistic training setups, and claims that both behavioral selection pressures and various implicit priors will shape AI motivations in ways that are hard to fully predict but still tractable and decision-relevant.
Key points:
The behavioral selection model treats AI behavior as driven by context-dependent cognitive patterns whose influence is increased or decreased by selection processes like reinforcement learning, depending on how much their induced behavior causes them to be selected.
The author defines motivations as “X-seekers” that choose actions they believe lead to X, uses a causal graph over training and deployment to analyze how different motivations gain influence, and emphasizes that seeking correlates of selection tends to be selected for.
Under the simplified causal model, three maximally fit categories of motivations are highlighted: fitness-seekers (including reward- and influence-seekers) that directly pursue causes of selection, schemers that seek consequences of being selected (such as long-run paperclips via power-seeking), and optimal kludges of sparse or context-dependent motivations that collectively maximize reward.
The author argues that developers’ intended motivations (like instruction-following or long-term benefit to developers) are generally not maximally fit when reward signals are flawed, and that developers may either try to better align selection pressures with intended behavior or instead shift intended behavior to better match existing selection pressures.
Implicit priors over cognitive patterns (including simplicity, speed, counting arguments, path dependence, pretraining imitation, and the possibility that instrumental goals become terminal) mean we should not expect maximally fit motivations in practice, but instead a posterior where behavioral fitness is an important but non-dominant factor.
The post extends the basic model to include developer iteration, imperfect situational awareness, process-based supervision, white-box selection, and cultural selection of memes, and concludes that although advanced motivation formation might be too complex for precise prediction, behavioral selection is still a useful, simplifying lens for reasoning about AI behavior and future work on fitness-seekers and coherence pressures.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post reports that CLR refocused its research on AI personas and safe Pareto improvements in 2025, stabilized leadership after major transitions, and is seeking $400K to expand empirical, conceptual, and community-building work in 2026.
Key points:
The author says CLR underwent leadership changes in 2025, clarified its empirical and conceptual agendas, and added a new empirical researcher from its Summer Research Fellowship.
The author describes empirical work on emergent misalignment, including collaborations on the original paper, new results on reward hacking demonstrations, a case study showing misalignment without misaligned training data, and research on training conditions that may induce spitefulness.
The author reports work on inoculation prompting and notes that concurrent Anthropic research found similar effects in preventing reward hacking and emergent misalignment.
The author outlines conceptual work on acausal safety and safe Pareto improvements, including distillations of internal work, drafts of SPI policies for AI companies, and analysis of when SPIs might fail or be undermined.
The author says strategic readiness research produced frameworks for identifying robust s-risk interventions, most of which remains non-public but supports the personas and SPI agendas.
The author reports reduced community building due to staff departures but notes completion of the CLR Foundations Course, the fifth Summer Research Fellowship with four hires, and ongoing career support.
The author states that 2026 plans include hiring 1–3 empirical researchers, advancing SPI proposals, hiring one strategic readiness researcher, and hiring a Community Coordinator.
The author seeks $400K to fund 2026 hiring, compute-intensive empirical work, and to maintain 12 months of reserves.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues that a subtle wording error in one LEAP survey question caused respondents and report authors to conflate three distinct questions, making the published statistic unsuitable as evidence about experts’ actual beliefs about future AI progress.
Key points:
The author says the report’s text described the statistic as if experts had been asked the probability of “rapid” AI progress (question 0), but the footnote actually summarized a different query about how LEAP panelists would vote (question 1).
The author states that the real survey item asked for the percentage of 2030 LEAP panelists who would choose “rapid” (question 2), which becomes a prediction of a future distribution rather than a probability of rapid progress.
The author argues that questions 0, 1, and 2 yield different numerical answers even under ideal reasoning, so treating responses to question 2 as if they reflected question 0 was an error.
The author claims that respondents likely misinterpreted the question, given its length, complexity, and lack of reminder about what was being asked.
The author reports that the LEAP team updated the document wording to reflect the actual question and discussed their rationale for scoreable questions but maintained that the issue does not affect major report findings.
The author recommends replacing the question with a direct probability-of-progress item plus additional scoreable questions to distinguish beliefs about AI progress from beliefs about panel accuracy.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues that Anthropic, despite its safety-focused branding and EA-aligned culture, is currently untrustworthy because its leadership has broken or quietly walked back key safety-related commitments, misled stakeholders, lobbied against strong regulation, and adopted governance and investment structures that the author thinks are unlikely to hold up under real pressure, so employees and potential joiners should treat it more like a normal frontier AI lab racing capabilities than a mission-first safety organization.
Key points:
The author claims Anthropic leadership repeatedly gave early investors, staff, and others the impression that it would not push the AI capabilities frontier and would only release “second-best” models, but later released models like Claude 3 Opus and subsequent systems that Anthropic itself described as frontier-advancing without clearly acknowledging a policy change.
The post argues that Anthropic’s own writings (e.g. “Core Views on AI Safety”) committed it to act as if we might be in pessimistic alignment scenarios and to “sound the alarm” or push for pauses if evidence pointed that way, yet Anthropic leaders have publicly expressed strong optimism about controllability and the author sees no clear operationalization of how the lab would ever decide to halt scaling.
The author claims Anthropic’s governance, including the Long-Term Benefit Trust and board, is weak, investor-influenced, and opaque, with at least one LTBT-appointed director lacking visible x-risk focus, and suggests that practical decision-making is driven more by fundraising and competitiveness pressures than by formal safety guardrails.
The post reports that Anthropic used concealed non-disparagement and non-disclosure clauses in severance agreements, only backed off after public criticism of OpenAI’s similar practice, and that a cofounder’s public statement about those agreements’ ambiguity was “a straightforward lie,” citing ex-employees who say the gag clauses were explicit and that at least one pushback attempt was rejected.
The author details Anthropic lobbying efforts on EU processes, California’s SB-1047, and New York’s RAISE Act, arguing that Anthropic systematically sought to weaken or kill strong safety regulation (e.g. opposing pre-harm enforcement, mandatory SSPs, independent agencies, whistleblower protections, and KYC tied to Amazon’s interests) while maintaining a public image of supporting robust oversight; they also accuse Jack Clark of making a clearly false claim about RAISE harming small companies.
The post claims Anthropic quietly weakened its Responsible Scaling Policy over time (e.g. removing commitments to plan for pauses, define ASL-N+1 before training ASL-N models, and maintain strong insider threat protections at ASL-3) without forthright public acknowledgment, and concludes that Anthropic’s real mission, as reflected in its corporate documents and behavior, is to develop advanced AI for commercial and strategic reasons rather than to reliably reduce existential risk, so staff and prospective employees should reconsider contributing to its capabilities work or demand much stronger governance.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post introduces ICARE’s open-access Resource Library as a central, regularly updated hub that provides conceptual explainers, legal news, AI-and-animals analysis, and curated readings to strengthen legal and strategic work in animal advocacy.
Key points:
The author describes the Resource Library as a hub offering ICARE’s educational and research materials on animal rights law and ethics.
Key Concepts for Animal Rights Law provides short explainers on foundational and emerging ideas such as legal personhood, animal agency, and negative vs positive rights.
Legal News About Animals presents global case updates with core facts, legal hooks, and implications for future advocacy.
The AI and Animals series examines how AI technologies already affect animals and explores issues such as precision livestock farming, advocacy uses of synthetic media, and AI alignment with animal interests.
Bibliography Recommendations curate open-access readings on topics including animal rights theory, multispecies families, political dynamics, Islamic animal ethics, and animals in war.
The author outlines use cases for strategy, teaching, research, and cross-cause work, and invites readers to suggest new concepts, cases, AI topics, or readings for future inclusion.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author offers a practical, experience-based playbook arguing that new EA city groups can become effective within two months by making onboarding easy, maintaining high-fidelity EA discussion, connecting members to opportunities, investing in organizers’ own EA knowledge, modeling “generous authority,” and setting clear community norms.
Key points:
The author argues that groups should make onboarding easy by maintaining an up-to-date website, a single sign-up form with an automated welcome email, an introductory call link, a resource packet, and clear event and resource pages.
The author recommends introductory calls and structured fellowships to ensure high-fidelity understanding of EA, including pushing back when members frame EA as any good and emphasizing ITN reasoning.
The author suggests groups make the EA network legible by hosting networking events, keeping a member directory, inviting EA speakers, posting job opportunities, and maintaining links to other groups and contacts in different cities.
The author urges organizers to take significant time to learn about EA by reading core materials, tracking learning goals, seeking knowledgeable mentors, joining discussion groups, and writing to learn.
The author describes “generous authority” as the event style organizers should model, with clear agendas, facilitation, regular announcements, active connecting, jargon avoidance, and quick action on interpersonal issues.
The author advises establishing clear community expectations through a visible code of conduct, norms for debate, rules for off-topic content, and an explicit statement that the group’s purpose is to maximize members’ impact rather than serve a social scene.
The author lists core resources groups should have within two months, including a strategy document, code of conduct, CRM, website, consistent events, and a 1-on-1 booking method, preferably using existing CEA templates.
The author states that strong EA groups feel organized around ideas, ambitious about impact, accessible, consistent, and structured around core activities like socials, 1-on-1s, high-visibility events, and a clear event calendar.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues that job applications hinge on demonstrated personal fit rather than general strength, and offers practical advice on how to assess, communicate, and improve that fit throughout the hiring process.
Key points:
The author defines fit as how well a person’s experience, qualifications, and preferences match a specific role at a specific organization.
The author says hiring managers seek someone who meets their particular needs, making role-specific fit more important than general impressiveness.
The author argues that applicants must show aptitude, culture fit, and excitement to demonstrate they are a “safe bet.”
The author recommends proactively addressing likely concerns about fit in application materials and interviews.
The author highlights the importance of telling a clear story that explains a candidate’s background and why it suits the role.
The author advises avoiding common errors such as ignoring red flags, being vague about excitement, stuffing keywords, or emphasizing irrelevant accomplishments.
The author suggests being strategic about where to apply by evaluating whether one can make a convincing case for fit.
The author notes that applicants should also consider whether each role fits them in terms of enjoyment, growth, and impact.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues in a speculative but plausible way that psychiatric drug trials obscure real harms and benefits because they use linear symptom scales that compress long-tailed subjective intensities, causing averages to hide large individual improvements and large individual deteriorations.
Key points:
The author claims psychiatric symptoms have long-tailed intensity distributions where high ratings like “9” reflect states far more extreme than linear scales imply.
The author argues that clinical trials treat symptom changes arithmetically, so very steep increases in states like akathisia can be scored as equivalent to mild changes in other domains.
The author states that mixed valence creates misleading cancellations: improvements in shallow regions of one symptom can be outweighed by worsening in steep regions of another even if numerical scores net to zero.
The author suggests average effect sizes such as “0.3 standard deviations” can emerge from populations where a substantial minority gets much worse while others get modestly better.
The author claims that disorders like depression or psychosis and medications like SSRIs, antipsychotics, and benzodiazepines all show this pattern of steep-region side-effects being compressed by standard scales.
The author recommends mapping individual response patterns, tracking steep regions explicitly, and using criticality and complex-systems tools instead of linear aggregation when evaluating psychiatric drugs.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author reflects on how direct contact with insects and cows during a field ecology course exposed a gap between their theoretical views on animal welfare and the felt experience of real animals.
Key points:
The author describes killing an insect by accident and contrasts the instant physical harm with the slow formation of their beliefs about animal welfare.
The author recounts using focal animal sampling on cows and finding that written behavioral transcripts failed to convey the richness of the actual encounters.
The author argues that abstract images of animal suffering are built from talks, videos, conversations, and biology rather than real memories, which removes crucial detail and context.
The author claims this abstraction makes it harder to care about individual animals, easier for trivial motives to override welfare considerations, and more likely to prompt self-evaluation rather than empathy.
The author questions whether beliefs about animal welfare formed mainly through theory may function poorly in practice and suggests that direct experience might help.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues, through metaphor and personal reflection, that individuals and institutions should invest in early-stage potential rather than select solely for proven performance, because nurturing undeveloped talent creates long-term value that harvesting only finished “gems” cannot.
Key points:
The author uses the Pien Ho parable to illustrate how valuable potential can be mistaken for an “ordinary stone” when judged only by immediate surface qualities.
The author argues that optimizing exclusively for proven talent leads to widespread underinvestment in developing people who could become highly valuable with support.
The author claims early-career programs should prioritize promise, drive, and character traits like kindness and responsibility over fully demonstrated performance.
The author notes that mentors and institutions often wish to support emerging talent but face resource constraints.
The author encourages prospective mentees to seek mentors who are caring, responsive, and growth-oriented rather than simply prestigious.
The author concludes that investing in latent potential benefits both individuals and the broader world, illustrated by the story of a friend whose promise was eventually recognized.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.