Your 2022 EA Forum Wrapped 🎁
The EA Forum team is excited to share your personal ✨ 2022 EA Forum Wrapped ✨. We hope you enjoy this little summary of how you used the EA Forum as you ring in the new year with us. Thanks for being part of the Forum!
Note: If you don’t have an EA Forum account, we won’t be able to make a personalized “wrapped” for you. If you feel like you’re missing out, today is a great day to make an account and participate more actively in the online EA community!
- Your EA Forum 2023 Wrapped (30 Dec 2023 23:12 UTC; 89 points)
- Posts from 2022 you thought were valuable (or underrated) (17 Jan 2023 16:42 UTC; 85 points)
- EA & LW Forum Summaries—Holiday Edition (19th Dec − 8th Jan) (9 Jan 2023 21:06 UTC; 24 points)
- EA & LW Forum Summaries—Holiday Edition (19th Dec − 8th Jan) (9 Jan 2023 21:06 UTC; 11 points; LessWrong)
Looks fun! Thanks for this. Curious about the EA Forum alignment methodology!
(also happy new year to the team, thanks for all your work on the forum!)
https://github.com/ForumMagnum/ForumMagnum/blob/5f08a68cfd2eb48d5a2286962cd70ddfea9a97a6/packages/lesswrong/server/resolvers/userResolvers.ts#L322-L339
I think it looks at engagement (I assume time spent on the Forum) and the comments/posts ratio.
Forum Team, maybe link “alignment” at https://forum.effectivealtruism.org/wrapped to this comment rather than the Wikipedia page? (If I’d been labeled “evil” I think I’d much rather be reassured that it’s for a completely irrelevant reason than linked to the D&D reference.)
For those who don’t program, this is what ChatGPT says this code means:
“This code defines a function `getAlignment` that takes in an object called `results` and returns a string indicating a combination of good/evil and lawful/chaotic alignments.

The function first initializes two variables, `goodEvil` and `lawfulChaotic`, to the strings ‘neutral’ and ‘Neutral’, respectively. It then checks the `engagementPercentile` property of the `results` object. If the engagement percentile is less than 0.33, `goodEvil` is set to ‘evil’. If the engagement percentile is greater than 0.66, `goodEvil` is set to ‘good’.

The function then calculates the ratio of `commentCount` to `postCount` in the `results` object, and uses this ratio to set the value of `lawfulChaotic`. If the ratio is less than 3, `lawfulChaotic` is set to ‘Chaotic’. If the ratio is greater than 6, `lawfulChaotic` is set to ‘Lawful’.

Finally, the function checks if both `lawfulChaotic` and `goodEvil` are equal to ‘neutral’. If they are, the function returns the string ‘True neutral’. Otherwise, it returns the concatenation of `lawfulChaotic` and `goodEvil` with a space in between.”

I don’t code, so I have no idea if this is accurate — please let me know if it’s off.
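For anyone who wants to see the logic spelled out, here’s a rough TypeScript sketch based on the linked ForumMagnum source and the summary above. The interface name and exact thresholds here are illustrative reconstructions, not a copy of the production code:

```typescript
// Hypothetical reconstruction of the Wrapped "alignment" logic.
// Field names follow the linked userResolvers.ts; treat this as a sketch.

interface WrappedResults {
  engagementPercentile: number; // 0..1, roughly: relative time spent on the Forum
  commentCount: number;
  postCount: number;
}

function getAlignment(results: WrappedResults): string {
  let goodEvil = "neutral";
  let lawfulChaotic = "Neutral";

  // Low engagement -> "evil", high engagement -> "good"
  if (results.engagementPercentile < 0.33) goodEvil = "evil";
  if (results.engagementPercentile > 0.66) goodEvil = "good";

  // Many comments per post -> "Lawful", few -> "Chaotic"
  const ratio = results.commentCount / results.postCount;
  if (ratio < 3) lawfulChaotic = "Chaotic";
  if (ratio > 6) lawfulChaotic = "Lawful";

  if (lawfulChaotic === "Neutral" && goodEvil === "neutral") {
    return "True neutral";
  }
  return `${lawfulChaotic} ${goodEvil}`;
}
```

So, for example, a heavy reader who comments a lot but rarely posts would come out “Lawful good”, while a light reader who mostly posts would come out “Chaotic evil”.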
I think it’s accurate, but I don’t know if it’s clearer
Here’s a shitty table that I think is clearer
Actually, ChatGPT does a decent job at that
This is arguably better than my table:
No, I think your table is substantially better than ChatGPT’s, because it factors out the two alignment dimensions into two spatial dimensions.
I have to squint a lot to see the sense in this mapping
I don’t think it’s meant to be taken seriously, just some whimsical easter egg
I’m really glad we made this. :)
Scanning through my strong upvotes & upvotes, here are things I think are really valuable (not an exhaustive list![1]), grouped in very rough categories:
Assorted posts I thought were especially underrated
Most problems fall within a 100x tractability range (under certain assumptions)
Epistemic Legibility
Contra Ari Ne’eman On Effective Altruism
Butterfly Ideas
The case for becoming a black-box investigator of language models
Methods-ish or meta posts
Terminate deliberation based on resilience, not certainty
Comparative advantage does not mean doing the thing you’re best at
Effectiveness is a Conjunction of Multipliers
Nonprofit Boards are Weird
Learning By Writing
How effective are prizes at spurring innovation?
How to do theoretical research, a personal perspective
Methods for improving uncertainty analysis in EA cost-effectiveness models
An experiment eliciting relative estimates for Open Philanthropy’s 2018 AI safety grants
Let’s stop saying ‘funding overhang’ & What I mean by “funding overhang” (and why it doesn’t mean money isn’t helpful)
As an independent researcher, what are the biggest bottlenecks (if any) to your motivation, productivity, or impact? (would love more answers here)
Other posts I was really excited about (which don’t fall neatly into other categories)
Concrete Biosecurity Projects (some of which could be big)
What happens on the average day? & What’s alive right now?
StrongMinds should not be a top-rated charity (yet)
Why Neuron Counts Shouldn’t Be Used as Proxies for Moral Weight
Does Economic Growth Meaningfully Improve Well-being? An Optimistic Re-Analysis of Easterlin’s Research: Founders Pledge
How likely is World War III?
A major update in our assessment of water quality interventions
Ideal governance (for companies, countries and more)
Rational predictions often update predictably*
My thoughts on nanotechnology strategy research as an EA cause area & A new database of nanotechnology strategy resources
[Crosspost]: Huge volcanic eruptions: time to prepare (Nature) & Should GMOs (e.g. golden rice) be a cause area? & Flooding is not a promising cause area—shallow investigation & lots of other posts on less-discussed causes
Notes on Apollo report on biodefense
How moral progress happens: the decline of footbinding as a case study
Air Pollution: Founders Pledge Cause Report
Case for emergency response teams
What matters to shrimps? Factors affecting shrimp welfare in aquaculture (see my comment on the post)
Some observations from an EA-adjacent (?) charitable effort
A few announcements
Open Philanthropy’s Cause Exploration Prizes: $120k for written work on global health and wellbeing & Cause Exploration Prizes: Announcing our prizes
Announcing the Change Our Mind Contest for critiques of our cost-effectiveness analyses & The winners of the Change Our Mind Contest—and some reflections
AI posts
Let’s think about slowing down AI & Slightly against aligning with neo-luddites
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
Samotsvety’s AI risk forecasts
AI Safety Seems Hard to Measure
AI strategy nearcasting / How might we align transformative AI if it’s developed very soon?
Biological Anchors external review by Jennifer Lin (linkpost)
Counterarguments to the basic AI risk case
Resources and lists
EA Opportunities: A new and improved collection of internships, contests, events, and more.
EA & LW Forums Weekly Summary (21 Aug − 27 Aug 22’) — the whole series
Forecasting Newsletter: January 2022 — the whole series
The Future Fund’s Project Ideas Competition — this is still a bank of really interesting project ideas
See more posts like this here:
Research agendas, questions, and project lists
Practical
Take action
Posts I wrote or was involved with (I searched for these separately; they weren’t on my “wrapped” page)
Events, highlighting especially good content, resources
Winners of the EA Criticism and Red Teaming Contest & Announcing a contest: EA Criticism and Red Teaming
Resource for criticisms and red teaming
Other posts
EA should taboo “EA should”
Invisible impact loss (and why we can be too error-averse)
Against “longtermist” as an identity
Notes on impostor syndrome
Thoughts on Forum posts:
A Forum post can be short
You don’t have to respond to every comment
Link-posting is an act of community service
Niche vs. broad-appeal posts (& how this relates to usefulness/karma) (a sketch)
Epistemic status: an explainer and some thoughts
Crossposts
Distillation and research debt
Reasoning Transparency
See also the Forum Digest Classics.
This is literally me scanning through quickly, using the “#” + [type a post name] shortcut to quickly insert hyperlinked posts, then very roughly grouping them into categories.
If I didn’t list a post, that doesn’t mean I didn’t think it was great or didn’t upvote it.
I think this is underrated:
Offering FAANG-style mock interviews
It’s not among the top Forum posts of the year or anything, but I’d want it to get more exposure.
Love this! I found it really valuable to be reminded of all the posts I’ve read this year and to reflect on how they’ve shaped my thinking. Big props to everyone involved in making this.
This is so awesome, much cooler than Spotify Wrapped!!
I got LG for my forum alignment—I’m guessing that that’s the most common one?
Comment if you got a different one (unless you’d rather not (I guess you could make a throwaway account so that no one judges you for being CE)).
I got neutral evil 😳
I am CHAOTIC Good MUAAHAHA
Neutral good! Which is indeed how I identify.
I do predict that most EAs are either lawful good or neutral good.
Looking back on my upvotes, surprisingly few great posts this year (<10, if not ~5). I don’t have a sense of how things were last year.
If anyone has insights about what they find themselves marking as “most important” and want to share them with me, I would love to hear more about your ideas. Our goal with this is to learn how we can encourage more of the most valuable content, and your opinion is valuable!
You can message me on the Forum or schedule time to chat here.
Suggestion: make the wrapped page easily shareable, e.g. downloadable as a .jpg. I had to take a screenshot on my phone, which didn’t capture everything.
I really enjoy this feature. :)