Posts from 2023 you thought were valuable (and underrated)

I’m sharing:

  • a list of the posts that the most people marked as “most valuable” in Forum Wrapped 2023, and

  • a list of the posts that seem most underrated: those with the most “most valuable” votes relative to their karma.

These lists are not objective or “true” collections of the most valuable and underrated posts from 2023. Relatively few people marked posts as “most valuable,” and I imagine that those who did didn’t do it very carefully or comprehensively. And various factors would bias the results (for instance, we ordered posts by upvotes and karma on the “Wrapped” page, and people probably remember more recent posts better).

Consider commenting if there are other posts you would like to highlight!

This post is almost identical to last year’s post: Posts from 2022 you thought were valuable (or underrated).

[Image: An illustration, in part to generate a preview image for the post.]

Which posts did the most Forum users think were “most valuable”?

Note that we ordered posts in “Wrapped” by your own votes, followed by karma score, meaning higher-karma posts probably got more “most valuable” votes.

“Most valuable” count | Author(s)[1] | Title
--- | --- | ---
28 | @Peter Wildeford | EA is three radical ideas I want to protect
28 | @Ariel Simnegar | Open Phil Should Allocate Most Neartermist Funding to Animal Welfare
24 | @AGB | 10 years of Earning to Give
14 | @Bob Fischer | Rethink Priorities’ Welfare Range Estimates
13 | @Rockwell | On Living Without Idols
12 | @Nick Whitaker | The EA community does not own its donors’ money
11 | @Jakub Stencel | EA’s success no one cares about
11 | @tmychow, @basil.halperin, @J. Zachary Mazlish | AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
10 | @Luke Freeman | We can all help solve funding constraints. What stops us?
10 | @zdgroff | How Long Do Policy Changes Matter? New Paper
9 | @kyle_fish | Net global welfare may be negative and declining
9 | @ConcernedEAs | Doing EA Better
7 | @Lucretia | Why I Spoke to TIME Magazine, and My Experience as a Female AI Researcher in Silicon Valley
7 | @Michelle_Hutchinson | Why I love effective altruism
7 | @JamesSnowden | Why I don’t agree with HLI’s estimate of household spillovers from therapy
7 | @Ren Ryba | Reminding myself just how awful pain can get (plus, an experiment on myself)
7 | @Amy Labenz | EA is good, actually
7 | @Ben_West | Third Wave Effective Altruism
6 | @Ben Pace | Sharing Information About Nonlinear
6 | @Zachary Robinson | EV updates: FTX settlement and the future of EV
6 | @NunoSempere | My highly personal skepticism braindump on existential risk from artificial intelligence.
6 | @leopold | Nobody’s on the ball on AGI alignment
6 | @saulius | Why I No Longer Prioritize Wild Animal Welfare
6 | @ES | Advice on communicating in and around the biosecurity policy community
6 | @Derek Shiller, @Bernardo Baron, @Chase Carter, @Agustín Covarrubias, @Marcus_A_Davis, @MichaelDickens, @Laura Duffy, @Peter Wildeford | Rethink Priorities’ Cross-Cause Cost-Effectiveness Model: Introduction and Overview
6 | @Karthik Tadepalli | What do we really know about growth in LMICs? (Part 1: sectoral transformation)
6 | @Nora Belrose | AI Pause Will Likely Backfire

Which posts were most underrated relative to their karma?

I looked at the number of people who had marked a post as “most valuable,” and then divided by [karma score]^1.5. (This is what I did last year, too.[2]) We got more ratings this year, so the cutoff was at least three “most valuable” votes (vs. two last year).
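
For concreteness, here’s a minimal sketch of that calculation in Python. This isn’t the actual script I used, and the post names and numbers below are made-up placeholders rather than real Forum data; it just illustrates the votes ÷ karma^1.5 ranking with this year’s three-vote cutoff.

```python
# Illustrative sketch of the "underrated-ness" ranking: "most valuable" votes
# divided by karma^1.5, keeping only posts with at least 3 votes.
# The posts below are hypothetical examples, not real Forum data.
example_posts = [
    {"title": "Example post A", "most_valuable_votes": 3, "karma": 45},
    {"title": "Example post B", "most_valuable_votes": 6, "karma": 320},
    {"title": "Example post C", "most_valuable_votes": 2, "karma": 15},  # below the cutoff
]

MIN_VOTES = 3  # this year's cutoff (last year's was 2)

def underratedness(post: dict) -> float:
    """Votes per karma^1.5; higher means more underrated relative to karma."""
    return post["most_valuable_votes"] / post["karma"] ** 1.5

ranked = sorted(
    (p for p in example_posts if p["most_valuable_votes"] >= MIN_VOTES),
    key=underratedness,
    reverse=True,
)
for post in ranked:
    print(f"{post['title']}: {underratedness(post):.5f}")
```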

“Most valuable” count | Author(s) | Title
--- | --- | ---
3 | @RobBensinger | The basic reasons I expect AGI ruin
3 | @Zach Stein-Perlman | AI policy ideas: Reading list
3 | @JoelMcGuire, @Samuel Dupret, @Ryan Dwyer, @MichaelPlant, @mklapow, @Happier Lives Institute | Talking through depression: The cost-effectiveness of psychotherapy in LMICs, revised and expanded
4 | @Lukas_Gloor | AI alignment researchers may have a comparative advantage in reducing s-risks
6 | @Nora Belrose | AI Pause Will Likely Backfire
5 | @Joe_Carlsmith | Predictable updating about AI risk
6 | @Karthik Tadepalli | What do we really know about growth in LMICs? (Part 1: sectoral transformation)
4 | @Winston | The option value argument doesn’t work when it’s most needed
4 | @Luise | How I solved my problems with low energy (or: burnout)
5 | @Omega | Critiques of prominent AI safety labs: Conjecture
28 | @Ariel Simnegar | Open Phil Should Allocate Most Neartermist Funding to Animal Welfare
5 | Link-posted by @Pablo | In Continued Defense Of Effective Altruism — Scott Alexander
14 | @Bob Fischer | Rethink Priorities’ Welfare Range Estimates
3 | @salonium | Why we didn’t get a malaria vaccine sooner, and what we can do better next time
10 | @zdgroff | How Long Do Policy Changes Matter? New Paper
3 | @Rafael Ruiz | PhD on Moral Progress—Bibliography Review
7 | @Ben_West | Third Wave Effective Altruism
9 | @ConcernedEAs | Doing EA Better
3 | @Joe_Carlsmith | Seeing more whole
4 | @MathiasKB | Two cheap ways to test your fit for policy work
3 | @Center on Long-Term Risk | Beginner’s guide to reducing s-risks [link-post]
3 | @Lawrence Chan | What I would do if I wasn’t at ARC Evals
3 | @Ozzie Gooen, @Slava Matyukhn | Announcing Squiggle Hub
6 | @Derek Shiller, @Bernardo Baron, @Chase Carter, @Agustín Covarrubias, @Marcus_A_Davis, @MichaelDickens, @Laura Duffy, @Peter Wildeford | Rethink Priorities’ Cross-Cause Cost-Effectiveness Model: Introduction and Overview
9 | @kyle_fish | Net global welfare may be negative and declining
11 | @tmychow, @basil.halperin, @J. Zachary Mazlish | AGI and the EMH: markets are not expecting aligned or unaligned AI in the next 30 years
5 | @Aidan Alexander, @CE | Nailing the basics – Theories of change
28 | @Peter Wildeford | EA is three radical ideas I want to protect
3 | @David_Althaus, @Ewelina_Tur | Impact obsession: Feeling like you never do enough good
6 | @ES | Advice on communicating in and around the biosecurity policy community

Some other places to find awesome posts (especially from 2023)

Consider sharing other posts you want to highlight in the comments here.

Thanks so much…

…for posting, commenting, upvoting, marking posts as “most valuable,” giving us feedback, and more!

(I’ll also flag that if you liked a post, you could tell the author! I think it’s especially gratifying if you explain why you found it valuable, and you can DM authors if you aren’t sure about leaving a public comment.)

  1. I decided to notify authors in case it’s nice to see this and because readers might want to explore other things they posted, but if you don’t like this, please let me know!

  2. From last year: “Just dividing by karma didn’t change the list much, and dividing by karma^2 penalized karma too much — the result was a bunch of posts that only had one “most valuable” mark that just had low karma. I played with a few other ways of modifying the “underrated-ness” metric, but they didn’t seem better.”
