Summaries of top forum posts (17th–23rd April 2023)

We’ve just passed the half-year mark for this project! If you’re reading this (whether you’re a regular reader or this is your first post), please consider taking this 5-10 minute survey (all questions are optional). If you listen to the podcast, we have a separate survey for that here. This will directly influence our decisions about whether and how to continue this project next year, and we appreciate everyone who takes the time to fill it out.

Back to our regularly scheduled intro...

This is part of a weekly series summarizing the top posts on the EA and LW forums—you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

If you’d like to receive these summaries via email, you can subscribe here.

Podcast version: Subscribe on your favorite podcast app by searching for ‘EA Forum Podcast (Summaries)’. A big thanks to Coleman Snell for producing these!

Object Level Interventions / Reviews

AI

12 tentative ideas for US AI policy (Luke Muehlhauser)

by Lizka

Linkpost for this list of ideas by Luke Muehlhauser, who tentatively thinks they would increase the odds of good outcomes from transformative AI. The ideas include:

  • Software export controls.

  • Requiring hardware security features on cutting-edge chips.

  • Tracking stocks and flows of chips and licensing big clusters.

  • Requiring a license to develop frontier AI models (which are then subject to info security and testing and evaluation requirements).

  • Funding specific genres of alignment, interpretability, model evaluation, and info sec R&D.

  • Creating a narrow antitrust safe harbor for AI safety & security collaboration.

  • Requiring certain kinds of AI incident reporting.

  • Clarifying the liability of AI developers for concrete AI harms.

  • Creating means for rapid shutdown of large compute clusters and training runs.

See also the List of lists of government AI policy ideas and a post on the FLI (Future of Life Institute) report Policymaking in the Pause, both by Zach Stein-Perlman.

The policy recommendations from the FLI brief were:

  • Mandate robust third-party auditing and certification.

  • Regulate access to computational power.

  • Establish capable AI agencies at the national level.

  • Establish liability for AI-caused harms.

  • Introduce measures to prevent and track AI model leaks.

  • Expand technical AI safety research funding.

  • Develop standards for identifying and managing AI-generated content and recommendations.

Talking publicly about AI risk

by Jan_Kulveit

The author has spoken publicly about AI risk in the Czech media frequently over the past year, including newspapers, radio, national TV, and popular podcasts. They suggest the following approaches (with examples in the post):

  • Aim to explain, not persuade.

  • Give scaled-down examples, e.g. how difficult it is to shut down even existing systems like the NY Stock Exchange.

  • Don’t focus on one scenario or story of AI risk.

  • Don’t invoke the “doom memeplex”; focus instead on the potential loss of control over our futures and what we can do about it.

  • Use metaphors that are accessible to the public and not too technical.

They’ve been selective about which media requests to accept, and have found the experience positive. Since ChatGPT and GPT-4, tech journalists have become reasonably informed, and both the plausibility of powerful AI and AI risk are now mainstream ideas.

grey goo is unlikely

by bhauth

Eliezer Yudkowsky and others have suggested that one pathway an AGI could use to eliminate humans would be to create nanobots that can enter human bloodstreams.

The post author argues this specific threat model is incredibly unlikely and not worth worrying about. They cover 13 potential types of “nanobots” (self-replicating microscopic machines whose fundamental mechanistic differences from all biological life would make them superior) and the barriers to each, as well as general barriers to assembly, such as the fact that protein-sized position sensors don’t exist.

The basic reasons I expect AGI ruin

by Rob Bensinger

Describes the basic reasons the author is worried about artificial general intelligence (AGI):

STEM-capable AGI, if developed, is dangerous

  • It’s likely to quickly and vastly outperform human intelligence (e.g. via larger working memory; AI already has a history of blowing past human performance on narrow tasks).

  • Danger is then inherent to the AGI’s planning, because ‘instrumental goals’ like acquiring knowledge, money, and power are useful for almost every difficult task.

  • We’re building AI as a powerful general search process. While current systems are primarily predictors, predicting a human and being a human aren’t the same, so the plans produced may look very alien and may not reflect human values.

STEM-capable AGI will likely be developed before relevant alignment capabilities

  • The author thinks 5-15 years to STEM-capable AGI is likely.

  • Alignment research is difficult and has no clear solution.

  • Robust versions of software typically lag behind non-robust versions.

  • The world and the ML community aren’t taking the problem seriously enough.

  • Currently, we mainly intervene on behavioral proxies rather than directly designing for safety.

  • Useful capabilities like interpretability aren’t yet in place.

Even if we tackle some of these issues, their sheer number and the disjunctive nature of some (i.e. there are many paths to bad outcomes) keep the author worried.

Animal Welfare

Leaked EU Draft Proposes Substantial Animal Welfare Improvements

by Ben_West

Linkpost for this press release by Eurogroup for Animals, which claims that the draft Impact Assessment report on the revision of the EU’s animal welfare legislation has been leaked and contains 18 measures, including:

  • Phase out cages for all species

  • Increase space allowance for all species

  • Ban the systematic culling of male chicks

  • Introduce welfare requirements for the stunning of farmed fish

  • Ban cruel slaughter practices like water baths and CO2 for poultry and pigs

  • Ban mutilations such as beak trimming, tail docking, dehorning, and the surgical castration of pigs

  • Limit journey times for the transport of animals destined for slaughter

  • Apply the EU’s standards to imported animal products in a way that is compatible with WTO rules

Hiding in Plain Sight: Mexico’s Octopus Farm/Research Facade

by Tessa @ ALI

In November 2022, Aquatic Life Institute (ALI) launched a global campaign aimed at increasing public and legislative pressure on countries and regions where octopus farms are being considered.

In 2023, there were some wins, including:

  • Hawaii’s Division of Aquatic Resources issued a cease and desist letter to Kanaloa Octopus Farm for operating without the required permits.

  • A house bill to prohibit octopus farming was proposed in Washington state and passed its first vote 9 (yes) to 2 (no).

  • In the UK, the RSPCA called for plans for the world’s first octopus farm (in Spain) to be halted.

However, the town of Sisal has since become the location of Mexico’s first octopus farm, a collaboration between a university research center (UNAM’s) and a commercial branch, with ~388 octopuses slaughtered each production cycle. ALI strongly opposes the operation of this farm disguised as a research center. It has petitioned the university to close the farm, and has asked the United Nations Development Program (which gave the farm a $50K grant) to stop funding cephalopod farms worldwide.

Animal Charity Evaluators Is Seeking Intervention Effectiveness Research and Cost-Effectiveness Estimates

by Animal Charity Evaluators

Animal Charity Evaluators is gathering relevant research on the effectiveness of different animal advocacy interventions to support their assessments and recommendations. You can view their existing list of reference material here. They also have a spreadsheet of existing cost-effectiveness estimates for different interventions, which you can view here.

If you are aware of any additional relevant research or estimates, they would appreciate hearing about it in the comments of the post.

Global Health and Development

ZzappMalaria: Twice as cost-effective as bed nets in urban areas

by Arnon Houri Yafin

Author’s tl;dr: Zzapp Malaria’s digital technology for planning and managing large-scale anti-malaria field operations obtained results that are twice as cost-effective as bed nets in reducing malaria in urban and semi-urban settings.

Ghana has approved the use of a malaria vaccine with >70% efficacy

by Henry Howard

The R21/Matrix-M malaria vaccine showed 71-80% efficacy in preventing cases of malaria in a randomized controlled phase 2 trial last year. This is substantially higher than the next best option (the RTS,S/AS01 vaccine), which reduces hospital admissions from severe malaria by ~30%. Ghana has approved this new vaccine for children aged 5-36 months. Ray_Kennedy notes in the comments that Nigeria has also approved it.

Notes on Teaching in Prison

by jsd

The author spent six months teaching in a prison in France. On balance, they are now less optimistic about the effects of teaching in prison. Their reflections include:

  • Many inmates want to attend classes, and there was never an emergency or violence in class while the author worked there.

  • The inmates’ academic level was low: around half would struggle to write one decent sentence. Classes weren’t very helpful at teaching job-relevant skills, and very few inmates went on to university.

  • Classes were important for enjoyment, a link to the outside, and a positive way to plan for the future and act on those plans.

  • Prison is extremely harsh, and minor offenses (e.g. fighting or drugs) are more likely to happen again inside prison than outside it.

  • Judges often consider alternative sentences, e.g. electronic monitoring, but these are also not very successful at preventing recidivism. The sentence you receive can depend heavily on the location and capacity of nearby prisons.

Opportunities

List of Short-Term (<15 hours) Biosecurity Projects to Test Your Fit

by Sofya Lebedeva

The author has compiled a list of biosecurity projects that don’t require a lab, should take ~10-15 hours, and are a good test of fit for biosecurity. Examples include conducting a literature review on a related topic or reviewing biosecurity policy in a given country. They suggest reflecting on the process after completing one; if you’re interested in getting more involved, you can contact them for help getting into the field, using your project as a work sample.

US Policy Master’s Degrees: Top Programs, Applications, & Funding (Part 2)

by US Policy Careers

Follow-on from Part 1, which discussed what policy master’s degrees are, why and when you might want one, and possible alternatives. This post discusses criteria for choosing where to apply, specific degrees they recommend, how to apply, and how to secure funding. It also includes a database of ~20 recommended policy master’s degree programs.

Impactful (Side-)Projects and Organizations to Start

by Alexandra Bos

A list of lists of EA project ideas, compiling over 20 posts with ideas across both longtermist and neartermist cause areas. Also see these forum tags for continually updated lists of similar ideas.

My experience getting funding for my biological research

by Metacelsus

The author has now applied for bio research funding from both government (NIH) and private sources. They found that government funding requires more paperwork upfront, has a low chance of success, and involves a long wait between application and results; however, it is processed quickly by university admins and is legible in terms of academic prestige. Private funding has easier applications, quicker turnarounds, and a higher chance of success if your work matches what philanthropists care about; however, it can be difficult for university admins to process (3+ months and counting in the author’s case), and because award terms are non-standardized, the university may extract more for overhead.

Community & Media

We’re losing creators due to our nitpicking culture

by TheAthenians

Cross-post of Duncan Sabien’s post “Killing Socrates”. The EA and LW forums have a culture of users critiquing any part of a post they disagree with or believe needs more rigor, rather than discussing the post’s core idea or building on the author’s ideas to make a better version of them. At scale, this can leave authors exhausted, feeling bad, and hesitant to post again.

Updates to the Effective Ventures US board by Zachary Robinson, Nicole_Ross, Nick_Beckstead and Apply or nominate someone to join the boards of Effective Ventures Foundation (UK and US) by Zachary Robinson

The EV US board currently has the following trustees:

  • Nicole Ross

  • Nick Beckstead

  • Zachary Robinson (new addition; interim CEO of EV US since January, previously Chief of Staff at Open Philanthropy)

  • Eli Rose (new addition; Senior Program Associate at Open Philanthropy)

Rebecca Kagan recently resigned from the board due to disagreements with the EV boards’ strategy and approach (she plans to share more thoughts on this soon).

The EV UK board trustees are:

  • Claire Zabel

  • Nick Beckstead

  • Tasha McCauley

  • Will MacAskill

Both the EV US and EV UK boards are still working to bring on additional trustees. They’re running open application rounds—you can apply or nominate someone here by May 14th.

5 Proposed Changes to the Funding System to Increase Org Survival and Impact

by Deena Englander

The author argues that better infrastructure support is the most important thing funders can provide to help EA orgs be more impactful. They suggest:

  1. Share a list of standard budget items every startup should consider including (e.g. accounting, legal, marketing, ops, coaching…). Potentially include a budget consultant as part of the funding approval process.

  2. Encourage startups to seek professional help. Potentially partner with management consultants who can support them.

  3. Encourage startups to spend money developing their community and org support system.

  4. Implement accountability metrics for compliance, e.g. requiring proof of a separate legal entity and bank account before transferring funds, and proof of insurance and annual P&Ls before continued funding.

  5. Track performance data: how much was funded, how long the org lasted, reasons for success or failure, and resources utilized.

500 Million, But Not A Single One More—The Animation

by Writer

Rational Animations has released an animated video of the post 500 Million, But Not A Single One More, which describes humanity’s battle to eradicate smallpox.
