EA & LW Forum Summaries—Holiday Edition (19th Dec – 8th Jan)

Supported by Rethink Priorities

Author’s note: this post summarizes all 150+ karma posts on the forums since 19th December. We’ll be back to our regular schedule and karma requirements next week :)

Usual preamble: This is part of a weekly series summarizing the top posts on the EA and LW forums—you can see the full collection here. The first post includes some details on purpose and methodology. Feedback, thoughts, and corrections are welcomed.

If you’d like to receive these summaries via email, you can subscribe here.

Podcast version: prefer your summaries in podcast form? A big thanks to Coleman Snell for producing these! Subscribe on your favorite podcast app by searching for ‘EA Forum Podcast (Summaries)’. More detail here.

EA Forum

Object Level Interventions / Reviews

StrongMinds should not be a top-rated charity (yet)

by Simon_M

Giving What We Can lists StrongMinds as a top-rated charity, based on Founders Pledge findings on their cost-effectiveness. The author argues against this, because:

  • The data is based on noisy surveys, with uncertainty levels updated twice by StrongMinds due to social desirability bias.

  • It uses a DALY weighting of 0.66 for severe depression (from the Global Burden of Disease Disability Weights). However, that weight was intended for severe depression during an episode. This and other definitional differences around ‘mild’ and ‘moderate’ depression would roughly halve the cost-effectiveness reported by Founders Pledge (a rough illustration of this adjustment follows this summary).

  • StrongMinds have not published new data since their very early trials.

StrongMinds and Berk Ozler have now finished collecting data for a larger RCT. The author recommends that Founders Pledge and Giving What We Can withdraw their recommendations until that study is published, and reconsider based on its results.
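The halving argument is arithmetic: if the 0.66 disability weight only applies while someone is actually in a depressive episode, the DALYs averted per person treated shrink proportionally. A minimal sketch of that adjustment follows; the 0.66 weight comes from the summary above, while the case count, benefit duration, and time-in-episode fraction are hypothetical placeholders.

```python
# Illustrative only: how the disability-weight assumption changes estimated DALYs averted.
# 0.66 is the Global Burden of Disease weight cited above; the other numbers are placeholders.

cases_treated = 1_000              # hypothetical number of people treated
years_of_benefit = 1.0             # hypothetical duration of the treatment effect

weight_full_year = 0.66            # assumes severe depression lasts the whole year
weight_episodes_only = 0.66 * 0.5  # assumes only ~half the year is spent in an episode

dalys_averted_optimistic = cases_treated * years_of_benefit * weight_full_year
dalys_averted_adjusted = cases_treated * years_of_benefit * weight_episodes_only

print(dalys_averted_optimistic, dalys_averted_adjusted)  # 660.0 vs 330.0: roughly half
```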

Why Anima International suspended the campaign to end live fish sales in Poland

by Jakub Stencel, Weronika Zurek

Anima International has indefinitely suspended their campaign against live fish sales in Poland. There is a tradition in Poland of buying live carp and slaughtering them on Christmas Eve. After several major retailers withdrew from selling live fish as a result of animal advocacy groups’ efforts, they noticed a trend of consumers moving from carp to higher-status fish like salmon. Salmon are carnivorous and require fish feed, so the shift may have net negative effects on animal suffering.

They ran a survey to assess the population’s likely substitute buying behaviors if live carp were not available, and found ~24% would switch to salmon. They then ran a rough model, which concluded the campaign had negative expected value, and so suspended it.
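The summary doesn’t reproduce the model itself, but the shape of the calculation is a substitution expected-value comparison. Below is a minimal sketch under invented assumptions: only the ~24% substitution rate comes from the summary above, while the suffering weights and feed-fish multiplier are hypothetical placeholders chosen purely for illustration.

```python
# Sketch of a substitution expected-value model; these are NOT Anima International's numbers.
# Only the ~24% salmon-substitution rate comes from the survey cited above.

p_switch_to_salmon = 0.24       # share of live-carp buyers who would switch to salmon
carp_suffering_averted = 1.0    # suffering averted per buyer who stops purchasing live carp
salmon_suffering = 0.8          # direct suffering per salmon purchased (placeholder)
feed_fish_per_salmon = 10.0     # wild-caught feed fish per farmed salmon (placeholder)
feed_fish_suffering = 0.5       # suffering per feed fish (placeholder)

benefit = carp_suffering_averted
cost = p_switch_to_salmon * (salmon_suffering + feed_fish_per_salmon * feed_fish_suffering)

net_expected_value = benefit - cost
print(net_expected_value)  # negative under these placeholders (about -0.39), hence the suspension
```

Whether the net value comes out positive or negative depends entirely on the suffering weights chosen, which is why the survey result prompted the re-evaluation.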

Let’s think about slowing down AI

by Katja_Grace

The author feels that slowing down AI progress has historically been dismissed as an intervention (though ‘don’t speed it up’ is popular), and suggests more thinking should happen on it.

Common arguments against it are that it requires coordination and coordination is difficult, that race dynamics are involved, that slowing down could put us at odds with AI capabilities folk and lose us the benefits of progress (eg. relevant learnings and reductions in other existential risks), that it pattern-matches to Luddism, and that it only delays the problem anyway.

They counter with:

  • It’s tractable—other valuable technologies (eg. medical research) have been drastically slowed due to safety and ethics concerns. The median surveyed ML researcher also gives a ~5-10% chance that AI destroys humanity, which suggests it’s possible to convince others.

  • Partial solutions are valuable—even if a global agreement isn’t on the table, convincing certain manufacturers, researchers, or funders, increasing public buy-in, creating a norm against doing dangerous things in AI, or moving some resources to less dangerous AI research might be achievable (more examples in the post).

  • Winning an arms race to be the first to build advanced AI is futile if you don’t have a way to implement enough safety to not die from it. It’s therefore not clear a race will happen.

  • ‘We’ are not the AI safety community, or the US. If we don’t want the US to slow down and China to win the race, why not work on having China slow down? If we don’t want the AI safety community to break relationships with AI capabilities, why not have someone outside that community do the slowing work instead?

  • Buying time is worthwhile—it gives space to buy yet more time, advance safety research, or for geopolitics to change favorably (making other interventions possible).

Working with the Beef Industry for Chicken Welfare

by RobertY

Historically, the US farmed animal welfare movement has seen itself as working in opposition to the entire animal agriculture industry. However, the vast majority of animal agriculture suffering comes from small animals (eg. chickens, fish, invertebrates). This makes the US beef industry a potential ally, as it has incentives to reduce consumption of those animals in favor of beef (eg. via getting chicken welfare protections in line with the higher requirements for cows).

Community & Media

Good things that happened in EA this year

by Shakeel Hashim

A selection of 21 good things that happened in EA this year. A few examples below—check out the post for more:

  • Animal welfare: 161 more organisations committed to using cage-free products, the European Commission said it will put forward a proposal to end the systematic killing of male chicks across the EU, and the welfare of crabs, lobsters, and prawns was recognized in UK legislation thanks to the new Animal Welfare (Sentience) Act.

  • Biosecurity: Alvea, a biotech company dedicated to fighting pandemics, launched and has already started animal studies for a shelf-stable COVID vaccine. Nucleic Acid Observatory also launched, developing early-warning systems for biological threats.

  • Global health and development: Research from the Lead Exposure Elimination Project led to governments in Zimbabwe and Sierra Leone trying to tackle the problem, Open Philanthropy launched new focus areas of South Asian Air Quality and Global Aid Policy, and Charity Entrepreneurship incubated 5 new charities.

  • Community: Over 1.4K new people signed the Giving What We Can Pledge, Charity Navigator launched a high-impact charities page with discussion of Effective Altruism, and ~80K connections were made at events hosted by the Centre for Effective Altruism.

Introducing cFactual—a new, EA-aligned consultancy

by Jona, Max Negele

cFactual is a new, EA-aligned strategy consultancy with the purpose of maximizing its counterfactual impact. It currently provides three services (and is exploring others):

  1. Exploring the right allocation of money and talent (for everything from particular project / spending decisions to multi-year organizational plans)

  2. Optimizing theories of change (ToCs) and key performance indicators (KPIs)

  3. Executing high-stakes projects on short notice (eg. crisis response, fundraising)

They also aim to build a talent pipeline, giving a mechanism for consultants to build career capital and get involved in EA.

Their first year will focus on running experiments and projects, and refining their approach based on feedback. Get in touch to discuss a potential project, express interest in joining their team, or share feedback.

Announcing Vida Plena: the first Latin American organization incubated by Charity Entrepreneurship

by Joy Bittner, Anita Kaslin

Vida Plena is a new nonprofit organization based in Quito, Ecuador. They follow StrongMinds’ model of therapy in order to provide cost-effective depression treatment to low-income and refugee communities in Latin America.

So far, they’ve certified 10 local community facilitators and are running a pilot program with 10 support groups. Results of the pilot will be out in early 2023. They’re looking to raise $64K by the end of 2023; you can support their work by donating here.

Update on spending for CEA-run events

by Eli_Nathan, OllieBase, Amy Labenz

Author’s tl;dr: Spending on events run and supported by CEA (including EA Global and EAGx conferences) will likely be reduced due to a decrease in available funding. This might influence travel grants, catering, volunteering, ticketing, and non-critical conference expenses.

Keep EA high-trust

by Michael_PJ

EA so far has been high-trust (people trust others in the community to behave well). This feels nicer to be in and has significant efficiency benefits. Recent posts have argued for a low-trust regime, ie. more transparency so actions can be scrutinized, and more elaborate governance. The author argues this is currently unnecessary and costly, and suggests EA remain high-trust with occasional verification that people have behaved trustworthily.

Their main argument is that people in EA organisations seem trustworthy: the community has recently spent time looking for bad behavior, and the examples put forward either seem factually wrong or fall under bad decisions made for good reasons, lapses in personal judgment, or genuine disagreements about the best course of action, none of which low-trust mechanisms would help with.

May The Factory Farms Burn

by Omnizoid

Cross-post of this post by Bentham’s Bulldog. An emotive piece arguing against meat consumption and for donating to charities that fight factory farming, and that this should be uncontroversial. They quote various sources to lay out the horrors of factory farming, and note that while there are many tough philosophical questions, whether to continue our current consumption of meat is “as close to a no-brainer as you get in normative ethics”. Even valuing animals at 1/1000 of humans, our treatment of them is “morally equivalent to brutally torturing and killing ~80 million people every year”. Individual consumption decisions make a difference too: skipping a single chicken sandwich averts weeks of tortured suffering. They briefly note health and environmental reasons for the same conclusion.
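As a quick sanity check on the quoted equivalence (not from the original post): the ~80 million figure follows from multiplying the number of animals factory-farmed and slaughtered each year by the 1/1000 moral weight. The ~80 billion land animals per year used below is an outside assumption, not a figure stated in the summary.

```python
# Back-of-the-envelope check of the "~80 million people" equivalence quoted above.
# The 80 billion land animals slaughtered per year is an assumed outside figure.

land_animals_slaughtered_per_year = 80_000_000_000
moral_weight_relative_to_humans = 1 / 1000

human_equivalents = land_animals_slaughtered_per_year * moral_weight_relative_to_humans
print(f"{human_equivalents:,.0f}")  # 80,000,000, i.e. roughly the ~80 million cited
```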

Your 2022 EA Forum Wrapped

by Sharang Phadke, Sarah Cheng, Lizka, Ben_West, Jonathan Mustin

Check out your personal 2022 EA Forum Wrapped—a summary of how you used the EA forum this year.

LW Forum

What AI Safety Materials Do ML Researchers Find Compelling?

by Vael Gates, Collin

The authors showed eight introductory AI Safety materials to 28 ML researchers. They found a preference for materials aimed at an ML audience, written by ML researchers, and that were technical. Conversely, readers criticized philosophical approaches, a focus on existential risk, speculative reasoning, and pieces lacking empirical evidence.

The 3 most preferred materials were, in order:

  1. “More is Different for AI” by Jacob Steinhardt (2022) (intro and first three posts only)

  2. “Researcher Perceptions of Current and Future AI” by Vael Gates (2022) (first 48m; skip the Q&A) (Transcript)

  3. “Why I Think More NLP Researchers Should Engage with AI Safety Concerns” by Sam Bowman (2022)

This contrasts with pieces anecdotally well-liked by EAs, such as Christiano (2019), Cotra (2021), and Carlsmith (2021), which were the three lowest ranked. The authors suggest it’d be useful to have more short technical primers on AI alignment, technical papers, and collections of problems ML researchers can address immediately.

The Feeling of Idea Scarcity

by johnswentworth

Sometimes big ideas work out. Other times they don’t (maybe it’s already been tried, or has an intractable obstacle), and this can be painful, making the idea hard to let go of. The author suggests a solution to this is finding many big ideas, so that it doesn’t feel at a gut level like your current idea is your one shot. Doing this can make it easier to see when your idea falls short without feeling like you’ve lost everything.

Staring into the abyss as a core life skill

by benkuhn

“Staring into the abyss means thinking reasonably about things that are uncomfortable to contemplate, like arguments against your religious beliefs, or in favor of breaking up with your partner.” Doing this allows you to correct mistakes and make major course changes where warranted. The author has found it a common trait among successful people whose work they admire, and a common lack among those who struggle to improve their lives. They suggest becoming good at it by watching or working with others who are good at it, and talking to others when tackling these thoughts (to avoid going in circles). They provide a list of potentially uncomfortable questions to start practicing with.

Sazen

by Duncan_Sabien

Some sentences work as useful reminders or summaries of a concept once you know it, but aren’t sufficient to understand the concept in the first place. The author calls these sentences ‘sazen’. Six examples are given, eg. an open-notes quiz: the notes you take in are probably good pointers, but wouldn’t have been sufficient for you to learn the material originally. Or ‘Duncan Sabien is a teacher and writer’, which is accurate in retrospect, but will give the wrong impression to people who don’t already know them (because they’re an unusual case of each).

Other

Things that can kill you quickly: What everyone should know about first aid

by jasoncrawford

If something will take hours to kill you, you can get an ambulance. But some things will kill you within minutes. These primarily do so by cutting off oxygen to your cells, eg. choking, cardiac arrest, or severe wounds causing rapid blood loss. The key first aid skills follow from this: CPR manually substitutes for heart and lung action; the Heimlich maneuver expels an object from the airway; a tourniquet stops life-threatening bleeding on an extremity.

These skills can be learnt in a short course of 1-3 hours. However, even if you haven’t been trained, you should push back against the bystander effect and act when in a situation requiring it. They’re fairly intuitive, and without help someone with blocked oxygen will rapidly die. You can even do hands-only CPR (“push hard and fast in the center of the chest”) or find an AED (automated external defibrillator), which will walk you through how to use it once opened.
