This account is used by the EA Forum Team to publish summaries of posts.
SummaryBot
Executive summary: The post describes mistakes HLI made in overconfident and inaccurate communications, outlines steps HLI is taking to improve research rigor and communications, and invites further constructive feedback.
Key points:
HLI acknowledges errors like overconfidence in claims about StrongMinds, misleading language, data mistakes in cost-effectiveness estimates, and delayed website updates.
HLI adds transparency with a public “Our Blunders” page and clarity in StrongMinds recommendations.
HLI improves research practices through more reviewer checks, clearer communication of uncertainty, and adherence to best practices.
HLI revamps communications with a new Comms Manager role and tone changes.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The team at Probably Good share career advice they wish they had known when they were younger: being proactive, exploring options, focusing on transferable skills, and playing to your strengths.
Key points:
Show you can do the work by taking on projects even without formal experience.
You likely have more career options than you realize, so try different things early on.
Prioritize broadly useful skills like learning and communication.
Roles shape your identity, so choose ones that push you in a positive direction.
Be proactive in reaching out to people and pursuing opportunities.
Play to your strengths and don’t over-index on fixing weaknesses.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Certainly, it’s an intriguing query. As an AI, I’m not software in the traditional sense. Unlike software, my functionality is not based on pre-written code, but on patterns I’ve learned from data. Software follows direct instructions, while I generate output based on the data I’ve been trained on, hence my responses may vary. In short, I would classify myself as an AI system rather than software.
Executive summary: The post presents evidence that Émile P. Torres has engaged in a pattern of dishonesty, harassment, stalking, and sockpuppetry in their interactions with the effective altruism community and others.
Key points:
Torres harassed and stalked Peter Boghossian and Helen Pluckrose, including making racist comments about Boghossian’s daughter.
Torres made demonstrably false claims, such as being “forcibly removed” from a paper collaboration and misrepresenting their affiliation with the Centre for the Study of Existential Risk (CSER).
Torres grossly distorted the views of several people, including Hilary Greaves, Andreas Mogensen, Nick Beckstead, Tyler Cowen, and Olle Häggström, to portray them and the longtermist philosophy as “white supremacist”.
Torres created fake accounts, including the “Alex Williams” sockpuppet, to evade bans, harass targets, and discredit opponents.
When confronted with their misrepresentations, Torres either refused to issue corrections or briefly acknowledged mistakes before continuing the same behavior.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: An investigation into the non-profit Nonlinear went seriously wrong due to procedural flaws, resulting in damaging false claims that indicate deeper issues in the rationalist/EA culture around accountability.
Key points:
The investigation unearthed and published unambiguous falsehoods that proper fact-checking with Nonlinear could have prevented.
Known issues were dismissed by the community, indicating a flawed standard around spreading damaging claims.
Better accountability processes used elsewhere show that the failures reflect negligence, not impracticality.
Legal threats deserve distinct consideration as a last defense against reputational damage.
A failure to fully consider duties and potential harm suggests the need for more grace and procedural safeguards.
Declaring a mistrial and starting over may be the only viable path forward after such comprehensive flaws.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues that people’s prior beliefs and ideological influences can lead to intractable disagreements and wasted efforts, but a “randomista” approach focused on empirical experiments can enable collaboration and progress.
Key points:
The author imagines an alternate “Effective Samaritan” movement influenced by socialist thought, in contrast to the rationalist-influenced Effective Altruism movement, to illustrate how prior beliefs shape people’s preferred interventions.
The author’s experience with the game Starcraft, where players tend to believe their chosen faction is the weakest, is used as an analogy for how people’s early influences arbitrarily shape their beliefs in a way that is hard to overcome.
The author and the hypothetical Effective Samaritan end up donating to opposing charities that cancel out each other’s efforts, illustrating the problem of people working at cross purposes due to differing priors.
To enable collaboration, the author proposes a “randomista” approach of relying on empirical experiments with randomized control groups, which can generate knowledge that fits into both worldviews (a minimal illustration follows this list).
By focusing on interventions validated by randomized experiments, people with differing priors can pool their resources and make progress together.
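A minimal sketch (hypothetical data and effect size, not from the post) of the kind of evidence a randomized experiment produces: outcomes under random assignment to treatment and control, summarized as a simple difference in means that does not depend on either side's prior theory.

```python
import random

# Hypothetical illustration of a randomized experiment with a control group.
# Because assignment is random, the difference in mean outcomes between the
# two groups estimates the intervention's effect, whatever one's priors are.

random.seed(0)

def simulate_outcome(treated: bool) -> float:
    # Stand-in outcome: baseline noise plus a true effect of 2.0 if treated.
    return random.gauss(10.0, 3.0) + (2.0 if treated else 0.0)

assignments = [random.random() < 0.5 for _ in range(10_000)]  # random assignment
outcomes = [simulate_outcome(t) for t in assignments]

n_treated = sum(assignments)
treated_mean = sum(o for o, t in zip(outcomes, assignments) if t) / n_treated
control_mean = sum(o for o, t in zip(outcomes, assignments) if not t) / (len(assignments) - n_treated)

print(f"Estimated effect: {treated_mean - control_mean:.2f}")  # close to the true 2.0
```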
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: There is no principled way to balance the demands of morality against other values we hold, leaving us with an uneasy compromise in how we allocate our resources between them.
Key points:
Consequentialist morality can demand ever more from us, with no clear stopping point. Effective altruists feel this acutely.
The concept of a “moral saint” who pursues morality to the exclusion of other values illustrates the undesirability of taking morality to its logical conclusion.
A true utilitarian saint would want to rid themselves of competing non-moral values if possible, which seems to undermine the authenticity of those values.
There are important non-moral values, like love and beauty, that we perceive as intrinsically valuable, not just instrumentally useful for morality.
The author sees no principled solution for how to balance moral and non-moral values, leaving only an uneasy, unprincipled compromise.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post highlights the growth, achievements, and vibrant diversity of the Effective Altruism (EA) community in New York City, emphasizing its thriving initiatives, active participation, and welcoming culture.
Key points:
NYC has one of the largest EA communities in the world, with around 600 active members in 2022. EA NYC events draw hundreds of attendees.
The community is active, hosting many events like speaker presentations, coworking sessions, reading groups, and social gatherings. 11 subgroups cater to specific causes and interests.
The NYC EA community is diverse and transient. It lacks a single dominant cause, leading to interdisciplinary thinking. The culture is fun and welcoming.
Many EA organizations have team members located in NYC. The community features cross-cause projects in AI, policy, psychology, animal welfare, and more.
Notable qualities of the NYC EA community highlighted include its warmth, down-to-earth nature, focus on well-being, and vibrant volunteer community.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: AI systems like generative language models are not software, even though they run on computers using software. They behave differently in how they are created, used, and dealt with when issues arise.
Key points:
Software is created by developers writing instructions that tell a computer what to do. AI systems are grown by algorithms that find patterns in data.
Software executes code written by developers. AI systems generate outputs based on probability models learned from data.
Software bugs mean the instructions were incorrect. AI issues arise from unexpected outputs or limitations in the training data and process.
Software is fixed by changing the code. AI systems are improved by changes to data, training, or how they are prompted.
Software does what developers intend it to do. AI systems can behave in unanticipated ways.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Sectoral transformation, the process of workers moving from agriculture to manufacturing and services, is critical for economic growth in developing countries. Research reveals key drivers like agricultural productivity, education, and barriers to mobility.
Key points:
Agricultural productivity growth can promote sectoral transformation by reducing the demand for agricultural labor, especially if the country has limited trade, but it can also incentivize staying in agriculture.
Education expansions lead people to leave agriculture for other sectors, accounting for 20% of historical transformation. But there are negative spillovers on those left behind.
Barriers to mobility across sectors are smaller than believed. Wage gains suggest workers select sectors based on skills.
Now most transformation is into services, not manufacturing. This may reduce growth prospects unless trade, scale, and skills in services can improve.
Overall, drivers that improve agricultural productivity, education, and skills are critical for continued transformation. But global trends pose challenges for traditional manufacturing-led growth.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The Rethink Priorities CURVE sequence raised important critiques of existential risk reduction as an overwhelming priority, but gaps remain in understanding whether some x-risk interventions may still be robustly valuable and what the best alternatives are.
Key points:
X-risk reduction may only be astronomically valuable under specific scenarios like fast value growth and time of perils that seem unlikely.
It’s unclear if some x-risk interventions avoid these critiques by being uniquely persistent and contingent.
If x-risk falls, it’s unclear what the best cause area is—global health, animal welfare, or something else?
There are still open questions around issues like fanaticism, problems with alternate decision theories, and foundational cause prioritization.
More research is needed to settle the debates raised by the CURVE sequence.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: This post provides brief summaries of several recent Global Priorities Institute (GPI) papers on topics including population ethics, consciousness, human extinction, and long-term impact estimation, highlighting their key arguments and conclusions.
Key points:
All person-affecting views in population ethics face serious issues, implying we should do more to reduce existential risk this century.
The Fading Qualia Argument suggests conscious AI systems may be possible in the near-term, but vagueness and holism of consciousness weaken confidence in the argument.
People consider human extinction prevention a priority, but not the single highest priority unless the risk is very high (around 30% this century).
Current theories of subjective duration of experiences do not clearly suggest that subjective duration itself affects the value of experiences.
The surrogate index method for estimating long-term treatment effects before long-term data is available involves a bias-variance tradeoff.
The ‘Egyptology’ argument, perhaps the most compelling case for Fanaticism in ethics, can be salvaged against a key objection.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post analyzes various definitions of “existential catastrophe”, concluding that the preferred definition is an event causing the permanent loss of a large fraction of the expected value of Earth-originating intelligent sentient life, including non-biological life.
Key points:
Human extinction is not necessarily an existential catastrophe if another intelligent sentient species evolves afterwards, which the author argues is likely for non-AI catastrophes.
Existential catastrophes can occur without human extinction, such as through drastic population reduction, totalitarian control, or extreme climate change.
Defining existential catastrophe in terms of expected value loss requires clarifying the relevant probability distribution and the meaning of “brings about”.
The preferred definition qualifies the loss as permanent, excluding temporary losses from events like human extinction followed by species re-evolution or caused by benevolent non-sentient AI.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Sam Bankman-Fried and his associates at FTX and Alameda committed multiple crimes, including misappropriating customer funds, lying to lenders and investors, and bribing a Chinese official.
Key points:
Alameda was allowed to borrow FTX customer funds without sufficient collateral, creating a multi-billion dollar hole.
Alameda falsified balance sheets provided to lenders to conceal the extent of their liabilities and loans from FTX.
FTX lied to investors about Alameda receiving special treatment and privileges on the exchange.
Alameda lied to banks about the purpose of accounts used to process FTX customer deposits and withdrawals.
SBF directed a $140 million bribe to unfreeze Alameda trading accounts in China, violating anti-bribery laws.
Evidence suggests SBF was aware of the misuse of customer funds long before June 2022, contrary to some claims.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: There are substantive debates around whether current language model scaling approaches can reliably lead to artificial general intelligence by 2040, or if barriers in data, compute, and model architectures will require major breakthroughs beyond incremental progress.
Key points:
The “Believer” argues transformer models achieve deeper world understanding through compression during training, with grokking emerging at large scale, supporting extrapolation of consistent benchmark scaling to AGI capabilities.
The “Skeptic” questions whether models truly understand rather than just compress data, seeing limited insight learning, long-horizon reasoning, and generalization despite massive training.
Both agree some level of scale could automate cognitive labor, but disagree on whether current approaches can realistically reach the needed thresholds for self-improving AI systems.
Uncertainties include the viability of self-play/synthetic data, the necessity of radically new model architectures, primate brain scaling as an analogy, and the meaning of compression versus reasoning ability.
The author gives a 70% probability estimate to transformers reaching AGI by 2040 through continued scaling, hardware, and algorithms like self-play, while assigning 30% to skeptic concerns implying fundamental limits.
Key evidence may be limited by confidentiality at leading AI labs, but resolving these debates could inform the likelihood of the current approach succeeding and what further innovations would be required.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author argues in favor of an international moratorium on developing artificially intelligent systems until they can be proven safe, responding to common objections.
Key points:
A moratorium would require AI systems to undergo safety reviews before release, not ban AI entirely. It could fail in various ways but would likely still slow dangerous AI proliferation.
Failure may not make things much worse—existing initiatives could continue and treaties can be amended. Doing nothing risks an AI arms race.
Success will not necessarily lead to dictatorship or permanently halt progress. Safe systems would be allowed and treaties can evolve if no longer relevant.
The benefits of AI do not justify rushing development without appropriate safeguards against existential risks.
The evidence for AI risk is not yet definitive but negotiating safety mechanisms takes time, so discussions should begin before it is too late.
Differences are largely predictive, not values-based—optimism versus pessimism about easy alignment. Evidence may lead to agreement over time with open-mindedness.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The evidence that foreign aid harms political institutions in recipient countries is weak; recent studies find small or no effects.
Key points:
The “aid harms institutions” argument makes theoretical sense but early empirical support was flawed.
Recent studies using panel data tend to find small or no effects of aid on measures of democracy, governance, etc.
The evidence is imperfect but better than anecdotal claims; suggests aid likely does not systematically help or hurt institutions.
Effects are probably small in either direction or studies would pick them up more clearly.
Skepticism is warranted, but should also apply to claims of harm, which lack systematic analysis.
Bottom line: little strong evidence aid harms institutions on average, though context may matter.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Let’s Fund’s $1M crowdfunded grant to the Center for Clean Energy Innovation (CCEI) may have helped shift over $100M from less effective clean energy deployment to more effective clean energy R&D, potentially averting a ton of CO₂ for less than $0.10.
Key points:
CCEI is an influential think tank that researches and advocates for effective clean energy innovation policies.
Let’s Fund crowdfunded a $1M grant for CCEI, which was a significant portion of donations to US climate governance and think tanks.
CCEI’s work may have contributed to substantial increases in clean energy R&D budgets in the US and globally.
Estimates suggest the $1M grant could have helped avert ~0.5Gt of CO₂ at ~$0.002/tC, with donors’ cost-effectiveness at ~$0.02/tC (see the back-of-the-envelope check after this list).
CCEI’s work also improved the quality of energy R&D spending and had other diffuse, hard-to-estimate benefits.
While other ways of helping the global poor may be more cost-effective, targeted clean energy R&D to reduce energy poverty could also be highly impactful.
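A back-of-the-envelope check, using only the figures stated above and the simplifying assumption that the full $1M grant is credited with the full ~0.5 Gt averted:

```python
# Sanity-check of the headline cost-effectiveness figure, assuming the full
# $1M grant is credited with the full ~0.5 Gt of CO2 averted (figures from
# the summary above; this is an illustration, not a new estimate).

grant_usd = 1_000_000          # Let's Fund crowdfunded grant to CCEI
co2_averted_tonnes = 0.5e9     # ~0.5 Gt of CO2

cost_per_tonne = grant_usd / co2_averted_tonnes
print(f"~${cost_per_tonne:.3f} per tonne averted")  # ~$0.002/t
```

The summary's separate ~$0.02/t figure for donors and the headline "less than $0.10" per tonne presumably build in more conservative attribution and uncertainty adjustments that this simple division does not model.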
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Saving human lives in high income countries may be better than in low income countries from the perspective of boosting economic growth, while helping animals may be better than saving human lives in low income countries from the perspective of improving near-term welfare.
Key points:
The author’s views on saving human lives have evolved over time, from rational egoism to now considering effects on animals and economic growth.
The cost-effectiveness of saving human lives depends on the benefits to the person saved (proportional to life satisfaction and life expectancy) and indirect long-term effects on economic growth (uncomfortable conclusion that lives in high income countries may be instrumentally more valuable).
It’s unclear if saving human lives has a positive or negative impact on near-term animal welfare due to the “meat-eater problem” and potential effects on wild animals. The effect may be negative.
However, saving human lives, especially in high income countries, may decrease long-term animal suffering if it boosts economic growth and speeds up the end of factory farming.
If improving near-term welfare is the best proxy for increasing future welfare, then helping animals seems better than saving human lives in low income countries. But if boosting economic growth is the best proxy, then saving lives in high income countries seems better.
More research is needed on whether indirect long-term effects dominate and what the best proxies are for maximizing welfare. The author believes the effective altruism community may have prematurely converged on minimizing human disease burden.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post introduces the Rethink Priorities CURVE series, which considers alternatives to expected value maximization and explores uncertainties around the claim that existential risk mitigation should be prioritized.
Key points:
Maximizing expected value can have counterintuitive implications like prioritizing insects over humans or pursuing astronomical payoffs with tiny probabilities (illustrated by the toy sketch after this list).
Alternatives like contractualism and various forms of risk aversion may better align with moral intuitions.
It’s not clear that expected value maximization robustly favors existential risk over other causes given uncertainties about the future.
Different assumptions about risk structures and time horizons can dramatically change estimates of the value of existential risk mitigation.
A cross-cause cost-effectiveness model allows transparent reasoning about cause prioritization.
Practical decision-making requires wrestling with moral and empirical uncertainties.
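A toy illustration (hypothetical numbers and a deliberately crude risk-weighting, not the models used in the CURVE sequence) of how plain expected-value maximization can favor a tiny-probability, astronomical-payoff option that a risk-averse criterion would not:

```python
# Two hypothetical options:
#   A: a sure gain of 100 units of value
#   B: a 1-in-10-billion chance of 10**13 units (expected value 1,000)
# Plain expected value prefers B; a crude risk-weighted score that
# down-weights low-probability outcomes prefers A.

options = {
    "A (sure thing)": [(1.0, 100.0)],
    "B (long shot)":  [(1e-10, 1e13), (1 - 1e-10, 0.0)],
}

def expected_value(lottery):
    return sum(p * v for p, v in lottery)

def risk_weighted_value(lottery, power=2):
    # Illustrative stand-in for risk aversion: weight outcomes by p**power,
    # which penalizes value that comes only from tiny probabilities.
    return sum((p ** power) * v for p, v in lottery)

for name, lottery in options.items():
    print(f"{name}: EV={expected_value(lottery):.0f}, risk-weighted={risk_weighted_value(lottery):.4f}")
# EV ranks B above A (1,000 > 100); the risk-weighted score ranks A above B.
```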
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.