SummaryBot
This account is used by the EA Forum Team to publish summaries of posts.
Executive summary: A speed giving game at a student fair in Uppsala saw moderate success in engaging students and generating sign-ups, with lessons learned for improving the game design and setup in the future.
Key points:
Around 75 people stopped by the table, and 62 participated in the 3-minute game, voting on where $1 donations would go.
23 people signed up for newsletters/events (a 37% rate among participants), showing interest in learning more about EA.
Using real money, limiting the choice to two charities, and improving the game design could have left a stronger impression.
The “help us give away money” hook was effective at drawing people in.
Being alone at the table was manageable, but having two people could have engaged roughly 50% more people.
The game highlighted the difficulty of judging effectiveness, but could have conveyed the core message better through more structured thinking.
This comment was auto-generated by the EA Forum Team. Contact us if you have feedback.
Executive summary: An intervention to reduce lead exposure from adulterated turmeric in Bangladesh had an estimated cost per DALY-equivalent averted of just under US$1.
Key points:
Turmeric was found to be the primary source of lead exposure in rural Bangladesh due to intentional adulteration with lead chromate.
An intervention coordinated with the Bangladeshi government reduced the share of turmeric containing lead from 47% to 0%.
A preliminary cost-effectiveness analysis estimates approximately 1 million DALY-equivalents will be averted at a cost of $560,000.
The estimated cost per DALY-equivalent averted is under $1 (see the rough arithmetic after these points), which is highly cost-effective compared to other global health interventions.
There are uncertainties around attributable lead reductions and timeframe, but the analysis provides an encouraging outlook on reducing lead exposures globally.
Further interventions on contaminated spices and other sources of lead exposure in low- and middle-income countries are likely to have similarly favorable cost-benefit ratios.
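As a rough check on the headline figure, the naive ratio implied by the numbers above is shown below; the original analysis may count costs and DALY-equivalents differently, so treat this as an illustration only:
\[
\frac{\$560{,}000}{1{,}000{,}000\ \text{DALY-equivalents}} \approx \$0.56\ \text{per DALY-equivalent averted,}
\]
which sits comfortably under the US$1 figure quoted above.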
This comment was auto-generated by the EA Forum Team. Contact us if you have feedback.
Executive summary: The author conceived of two deities—Humo, representing humanity, and Robutil, representing robot utilitarians—that offer guidance on moral issues. Though sometimes aligned, they have different priorities and languages. Consulting both provides balance.
Key points:
Humo and Robutil agree on some basic effective altruism principles but disagree on others, like slack and psychological health.
Humo emphasizes compassion, health, and intrinsic value of relationships. Robutil is more utilitarian and ruthless.
Each struggles to fully understand the other’s perspective at first.
With wisdom, they respect each other and see their views as complementary.
Consulting both gods helps the author find balance between human and utilitarian considerations.
Imagining their idealized Platonic forms, despite imperfections, is useful for moral guidance.
This comment was auto-generated by the EA Forum Team. Contact us if you have feedback.
Executive summary: Effective Altruism Philippines recently held a successful career planning retreat for students and professionals focused on cause prioritization, career building, and expanding networks.
Key points:
The retreat aimed to boost learning on EA cause areas, help develop career plans, and facilitate connections between attendees.
Outcomes showed high satisfaction, value ratings, and new connections made. Attendees also reported specific career action plans resulting from the retreat.
What went well included engaged participants, great speakers, and a welcoming environment.
Areas for improvement included a less packed schedule, more emphasis on one-on-ones, added discussions, more applicants, and better social events.
Plans going forward include more events on cause areas, reading groups, workshops, and continued career assistance services.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The cultures of biosecurity and computer security differ in important ways due to the differences in constraints and capabilities surrounding biological vs. computer vulnerabilities.
Key points:
Computer security culture values openness, breaking things to understand them, and satisfying curiosity. This culture developed in a context where vulnerabilities could be fixed by vendors, avoided in future software, and mitigated by users.
Biosecurity culture is much more cautious about disclosing and exploring vulnerabilities. This is because biology lacks easy fixes, mitigations are expensive, and a vulnerability could enable serious harm if exploited by malicious actors.
The norms of computer security culture would be risky and irresponsible if applied directly to biosecurity. The constraints are different enough that different norms have developed.
There are good reasons for biosecurity culture being more closed and cautious than typical computer security culture given the lack of mechanisms for mitigating biological risks.
Understanding these different constraints helps explain the different norms despite both fields dealing with vulnerabilities and risks.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post describes mistakes HLI made in overconfident and inaccurate communications, outlines steps HLI is taking to improve research rigor and communications, and invites further constructive feedback.
Key points:HLI acknowledges errors like overconfidence in claims about StrongMinds, misleading language, data mistakes in cost-effectiveness estimates, and delayed website updates.
HLI is adding transparency via a public “Our Blunders” page and clarifying its StrongMinds recommendation.
HLI is improving research practices, including more reviewer checks, clearer communication of uncertainty, and following best practices.
HLI is revamping its communications with a new Comms Manager role and changes in tone.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Some current AI governance advocacy strategies may be net-negative and counterproductive for preventing AI existential risk. Advocates should ensure their arguments directly address AI safety concerns rather than using indirect tactics.
Key points:
Advocating for AI regulation without clearly explaining x-risk concerns can lead to ineffective policies that don’t prevent catastrophe.
Portraying AI capabilities as threats could incentivize governments to invest in dangerous AI races.
Overstating AI threats without expertise can undermine an advocate’s credibility on addressing real risks.
Advocates should directly explain the x-risk problem and propose solutions tailored to it.
Slowing AI progress is an insufficient goal; the aim should be preventing existential catastrophe.
Arguments for regulation should be honest, not tactical, and consider potential pitfalls via premortems.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: A workshop at NeurIPS 2023 will bring together AI researchers with moral philosophers and psychologists to facilitate interdisciplinary collaboration on developing ethical AI systems.
Key points:
The fields of AI, moral philosophy, and moral psychology use specialized languages and frameworks, limiting interdisciplinary collaboration.
The workshop will host talks by leading researchers working at the intersections of these fields.
Talks will demonstrate applying theories from philosophy and psychology to ethical AI practices.
Junior scholars will give short commentaries on the talks from cross-disciplinary perspectives.
Poster sessions will enable discussion and exchange among attendees.
The workshop calls for contributions applying moral philosophy and psychology to AI practices.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author shares key insights from several books on nuclear weapons and Cold War history that are relevant for thinking about AI governance today.
Key points:
Estimates of technology riskiness are vulnerable to political and economic pressures.
Policy change requires leaders to deeply understand and prioritize an issue.
Technology can change global politics in unpredictable ways.
Empathy for and understanding of rival perspectives are critical in international relations.
Leaders face domestic political constraints even if personally motivated.
Outside demographic and economic analyses can sometimes outperform domain experts.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: A survey of legal academics finds broad support for granting greater legal protections to future generations.
Key points:
Over two-thirds of legal academics surveyed believe future generations have some basis to sue for harms, even 100+ years in the future.
On average, academics felt future generations should be protected about 3 times more than they currently are.
Environmental law and constitutional law were seen as the most promising avenues for protecting future generations.
Legal interventions were viewed as among the most predictable and feasible ways to help future generations.
Academics were moderately confident the law could help safeguard humanity from long-term existential risks.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post discusses the open source and closed source models for AI development, arguing that both have failure modes of enabling catastrophic or dystopian outcomes. It proposes regulated source as an alternative model.
Key points:
Open source risks irresponsible use of AI, while closed source risks centralized control and dystopia. Both have concerning failure modes.
The post proposes regulated source as an alternative model, with transparent standards and sharing of code/knowledge among approved organizations.
This aims to balance open proliferation and centralized control, avoiding the failure modes of both.
The IAEA provides a real-world model of regulated technology sharing among approved parties.
Much discussion focuses just on open vs closed source, but we need new approaches like regulated source.
The idea needs more development and analysis of opportunities, challenges, and drawbacks.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The author lists 9 projects they would pursue if not working on safety standards, including ambitious interpretability, onboarding senior researchers, extending mentoring pipelines, grantmaking, writing takes, and running the Long-Term Future Fund. They believe technical AI safety is crucial but other work is valuable too, and the community should be more robust.
Key points:
Ambitious mechanistic interpretability research could help understand powerful models and advance AI safety. Projects include defining explanations and metrics, analyzing neural networks, and balancing quality and realism.
Late stage project management like turning research into proper papers is valuable for communicating ideas clearly.
Creating concrete research projects and agendas helps onboard new researchers and secure funding. But deep expertise is needed to contribute meaningfully.
Alleviating bottlenecks at Open Philanthropy could increase AI safety funding substantially. Working there directly or designing scalable programs could help.
Increasing funding to other organizations beyond Open Philanthropy would also help the ecosystem. This could involve fundraising, convincing adjacent funders, or earning to give.
Running the Long-Term Future Fund well is important for having an independent grantmaker and funding independent work. But the position seems challenging.
Onboarding senior researchers directly through networking and showcasing promising research helps. Becoming a PhD student also creates opportunities.
Extending mentorship pipelines smooths transitions to full-time AI safety jobs. This involves encouraging PhDs, internships, fellowships, mentoring, and concrete projects.
Writing blog posts clarifies thinking and spreads ideas. But impact depends on audience and uptake.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The report discusses key factors enabling scaling of AI systems, including costs, hardware, parallelization techniques, data availability, and energy usage. Significant investments have been made by major companies recently, suggesting state-of-the-art models likely cost around half a billion dollars. GPU capabilities beyond raw compute power drive their high cost. Data limitations seem surmountable through alternative sources and techniques. Energy needs could pose engineering challenges in the near future, while also enabling satellite detection of major training runs.
Key points:
Frontier AI models likely cost around $500 million currently, with about 80% spent on hardware and 20% on operating costs; GPUs account for roughly 70% of hardware costs (see the rough breakdown after these points).
Communication bandwidth, not just compute power, drives the high price of ML GPUs relative to gaming GPUs.
Parallelization techniques each have limitations that constrain scaling, especially communication costs.
Private data, multimodal training, and other techniques can supplement natural language data.
Energy needs may soon require gigawatt-scale supercomputers, posing engineering challenges but enabling satellite detection.
FLOPs are an unreliable metric for ML hardware capabilities due to specialization such as lower-precision number formats. More stable metrics could improve forecasting and regulation.
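As a rough illustration only, applying the stated shares to the approximately $500 million headline figure gives (the report's own accounting may differ):
\[
\text{hardware} \approx 0.8 \times \$500\text{M} = \$400\text{M},\qquad
\text{operating} \approx 0.2 \times \$500\text{M} = \$100\text{M},\qquad
\text{GPUs} \approx 0.7 \times \$400\text{M} = \$280\text{M}.
\]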
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post offers practical steps for EA communities to prevent sexual harassment, including designating responsible personnel, establishing a code of conduct, and training. It shares EA Israel’s concise code of conduct based on Israeli law.
Key points:
Community managers should designate responsible personnel for harassment prevention who receive training.
Establish and publish a clear code of conduct based on local laws.
The shared code prohibits coercive acts, indecent behavior, unwanted advances, derogatory treatment, unauthorized publication of private material, and retaliation.
It outlines complaint procedures: internal, criminal, and civil.
Internally, it requires prompt, respectful investigations and recommendations to leadership.
Leaders must take action to address harm and prevent further harassment.
Additional steps include training, requiring staff/volunteers to refrain from harassment, and publishing the code.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The evidence that multiplicative decomposition improves binary probability forecasts is weak.
Key points:
Conceptual arguments question whether decomposition improves accuracy.
Empirical research finds limited evidence for decomposition benefits.
Time series decomposition seems quite different from binary probability decomposition.
A small experiment could help determine if decomposition helps binary forecasts.
Recent literature should be reviewed for more evidence on the technique.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post seeks to identify policies to achieve Rawlsian principles of justice, focused on economic equality and political liberalism.
Key points:
The “least advantaged” are defined based on lifetime consumption rather than wealth, excluding voluntary choices.
Markets maximize efficiency but can fail due to externalities and irrational behavior, justifying limited government intervention.
Empirically, inclusive institutions like democracy and free markets improve outcomes for the disadvantaged globally.
Prescriptions include democracy, free markets and trade, land value taxes, Pigouvian taxes, and a social safety net.
Further details and justification are needed on implementing policies for developed vs. developing nations.
The first Rawlsian principle on civil liberties was not addressed.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Canadian regulations for biorisk seem comprehensive and effective at achieving compliance, though incentives may focus more on resolving issues than on prevention, and gaps likely remain for extreme risks.
Key points:
Canada’s biosecurity laws cover private and public work with pathogens, require licensing and reporting, and empower rapid government action for extreme risks.
Compliance is high, Canada ranks highly internationally, and there are signs that risk has been reduced.
However, incentives favor resolving issues over avoiding them, reporting lags behind risk events, and oversight has had issues around conflicts of interest.
Key provisions relate to licensing, security clearances, biosafety officers, reporting, inspections and penalties.
Laws apply to emerging bioengineered risks and have flexibility, but response times and incentives could be better optimized.
The regime aims to get ahead of risks, was government-driven, and built on previous voluntary standards.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: Fourteen teams formed at AI Safety Camp to explore different approaches for ensuring safe and beneficial AI. Teams investigated topics like soft optimization, interpretable architectures, policy regulation, failure scenarios, scientific discovery models, and theological perspectives. They summarized key insights and published some initial findings. Most teams plan to continue collaborating.
Key points:
One team looked at foundations of soft optimization, exploring variants of quantilization and issues like Goodhart’s curse.
A team reviewed frameworks like “positive attractors” and “interpretable architectures”, finding promise but also potential issues.
One group focused on EU AI Act policy, drafting standards text for high-risk AI regulation.
A team mapped possible paths to AI failure, creating stories about uncontrolled AI like “Agentic Mess”.
Some investigated current scientific discovery models, finding impressive capabilities but issues like hallucination.
Researchers explored connections between Islam and AI safety, relating perspectives on AI as a being.
Teams published initial findings and plan further collaboration. Most see their projects as starting points for ongoing research.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The team at Probably Good shares career advice they wish they had known when younger, covering being proactive, exploring options, focusing on transferable skills, and playing to your strengths.
Key points:
Show you can do the work by taking on projects even without formal experience.
You likely have more career options than you realize, so try different things early on.
Prioritize broadly useful skills like learning and communication.
Roles shape your identity, so choose ones that push you in a positive direction.
Be proactive in reaching out to people and pursuing opportunities.
Play to your strengths and don’t over-index on fixing weaknesses.
This comment was auto-generated by the EA Forum Team. Feel free to point out issues with this summary by replying to the comment, and contact us if you have feedback.
Executive summary: The post argues that aligning AI with human values requires changing profit incentives, and proposes several ideas to create businesses incentivized to develop beneficial AI alignment.
Key points:
Alignment research is constrained by its reliance on limited nonprofit funding rather than market incentives.
Companies that audit AI for safety/security issues could be profitable and build expertise.
Firms could offer alignment consultation, training, red teaming, and evaluation services.
New strategies to align AI could be sold as proprietary products to companies.
An endowment fund could provide equity in alignment innovations, reimbursing researchers.
Market-driven approaches may steer alignment methods in beneficial directions.
This comment was auto-generated by the EA Forum Team. Contact us if you have feedback.