Your 2022 EA Forum Wrapped
The EA Forum team is excited to share your personal 2022 EA Forum Wrapped. We hope you enjoy this little summary of how you used the EA Forum as you ring in the new year with us. Thanks for being part of the Forum!
Note: If you don't have an EA Forum account, we won't be able to make a personalized "wrapped" for you. If you feel like you're missing out, today is a great day to make an account and participate more actively in the online EA community!
- Your EA Forum 2023 Wrapped by 30 Dec 2023 23:12 UTC; 89 points
- Posts from 2022 you thought were valuable (or underrated) by 17 Jan 2023 16:42 UTC; 87 points
- EA & LW Forum Summaries - Holiday Edition (19th Dec - 8th Jan) by 9 Jan 2023 21:06 UTC; 24 points
- EA & LW Forum Summaries - Holiday Edition (19th Dec - 8th Jan) by 9 Jan 2023 21:06 UTC; 11 points (LessWrong)
Looks fun! Thanks for this. Curious about EA forum alignment methodology!
(also happy new year to the team, thanks for all your work on the forum!)
https://github.com/ForumMagnum/ForumMagnum/blob/5f08a68cfd2eb48d5a2286962cd70ddfea9a97a6/packages/lesswrong/server/resolvers/userResolvers.ts#L322-L339
I think it looks at engagement (I assume time spent on the Forum) and the comments/posts ratio.
Forum Team, maybe link "alignment" at https://forum.effectivealtruism.org/wrapped to this comment rather than the Wikipedia page? (If I'd been labeled "evil" I think I'd much rather be reassured that it's for a completely irrelevant reason than linked to the D&D reference.)
For those who don't program, this is what ChatGPT says this code means:
"This code defines a function `getAlignment` that takes in an object called `results` and returns a string indicating a combination of good/evil and lawful/chaotic alignments.
The function first initializes two variables, `goodEvil` and `lawfulChaotic`, to the string 'neutral' and 'Neutral', respectively. It then checks the `engagementPercentile` property of the `results` object. If the engagement percentile is less than 0.33, `goodEvil` is set to the string 'evil'. If the engagement percentile is greater than 0.66, `goodEvil` is set to the string 'good'.
The function then calculates the ratio of `commentCount` to `postCount` in the `results` object, and uses this ratio to set the value of `lawfulChaotic`. If the ratio is less than 3, `lawfulChaotic` is set to 'Chaotic'. If the ratio is greater than 6, `lawfulChaotic` is set to 'Lawful'.
Finally, the function checks if both `lawfulChaotic` and `goodEvil` are equal to 'neutral'. If they are, the function returns the string 'True neutral'. Otherwise, it returns the concatenation of `lawfulChaotic` and `goodEvil` with a space in between."
I don't code, so I have no idea if this is accurate, so please let me know if it's off.
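For reference, here's a rough TypeScript sketch of the logic as described above. It's paraphrased from the ChatGPT summary (and the resolver linked earlier), not copied verbatim, so the exact field names and thresholds in ForumMagnum may differ:

```typescript
// Rough sketch of the Wrapped "alignment" logic, reconstructed from the
// description above. Field names follow the ChatGPT summary and may not
// match ForumMagnum's userResolvers.ts exactly.
interface WrappedResults {
  engagementPercentile: number; // 0-1: how your time on the Forum compares to other users
  commentCount: number;
  postCount: number;
}

function getAlignment(results: WrappedResults): string {
  let goodEvil = 'neutral';
  let lawfulChaotic = 'Neutral';

  // Good/evil axis: engagement percentile
  if (results.engagementPercentile < 0.33) {
    goodEvil = 'evil';
  } else if (results.engagementPercentile > 0.66) {
    goodEvil = 'good';
  }

  // Lawful/chaotic axis: ratio of comments to posts
  // (a postCount of 0 gives Infinity, which lands on 'Lawful' here)
  const ratio = results.commentCount / results.postCount;
  if (ratio < 3) {
    lawfulChaotic = 'Chaotic';
  } else if (ratio > 6) {
    lawfulChaotic = 'Lawful';
  }

  if (lawfulChaotic === 'Neutral' && goodEvil === 'neutral') {
    return 'True neutral';
  }
  return `${lawfulChaotic} ${goodEvil}`;
}
```

In other words: commenting a lot relative to posting reads as "Lawful", posting a lot relative to commenting reads as "Chaotic", being in the top third of engagement reads as "good", and being in the bottom third reads as "evil".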
I think it's accurate, but I don't know if it's clearer
Here's a shitty table that I think is clearer
Actually, ChatGPT does a decent job at that
This is arguably better than my table:
No, I think your table is substantially better than ChatGPT's because it factors out the two alignment dimensions into two spatial dimensions.
I have to squint a lot to see the sense in this mapping
I don't think it's meant to be taken seriously, just some whimsical easter egg
I'm really glad we made this. :)
Scanning through my strong upvotes & upvotes, here are things I think are really valuable (not an exhaustive list![1]), grouped in very rough categories:
Assorted posts I thought were especially underrated
Most problems fall within a 100x tractability range (under certain assumptions)
Epistemic Legibility
Contra Ari Ne'eman On Effective Altruism
Butterfly Ideas
The case for becoming a black-box investigator of language models
Methods-ish or meta posts
Terminate deliberation based on resilience, not certainty
Comparative advantage does not mean doing the thing you're best at
Effectiveness is a Conjunction of Multipliers
Nonprofit Boards are Weird
Learning By Writing
How effective are prizes at spurring innovation?
How to do theoretical research, a personal perspective
Methods for improving uncertainty analysis in EA cost-effectiveness models
An experiment eliciting relative estimates for Open Philanthropy's 2018 AI safety grants
Let's stop saying "funding overhang" & What I mean by "funding overhang" (and why it doesn't mean money isn't helpful)
As an independent researcher, what are the biggest bottlenecks (if any) to your motivation, productivity, or impact? (would love more answers here)
Other posts I was really excited about (which donât fall neatly into other categories)
Concrete Biosecurity Projects (some of which could be big)
What happens on the average day? & Whatâs alive right now?
StrongMinds should not be a top-rated charity (yet)
Why Neuron Counts Shouldnât Be Used as Proxies for Moral Weight
Does Economic Growth Meaningfully Improve Well-being? An Optimistic Re-Analysis of Easterlin's Research: Founders Pledge
How likely is World War III?
A major update in our assessment of water quality interventions
Ideal governance (for companies, countries and more)
Rational predictions often update predictably*
My thoughts on nanotechnology strategy research as an EA cause area & A new database of nanotechnology strategy resources
[Crosspost]: Huge volcanic eruptions: time to prepare (Nature) & Should GMOs (e.g. golden rice) be a cause area? & Flooding is not a promising cause area - shallow investigation & lots of other posts on less-discussed causes
Notes on Apollo report on biodefense
How moral progress happens: the decline of footbinding as a case study
Air Pollution: Founders Pledge Cause Report
Case for emergency response teams
What matters to shrimps? Factors affecting shrimp welfare in aquaculture (see my comment on the post)
Some observations from an EA-adjacent (?) charitable effort
A few announcements
Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing & Cause Exploration Prizes: Announcing our prizes
Announcing the Change Our Mind Contest for critiques of our cost-effectiveness analyses & The winners of the Change Our Mind Contest - and some reflections
AI posts
Let's think about slowing down AI & Slightly against aligning with neo-luddites
Without specific countermeasures, the easiest path to transformative AI likely leads to AI takeover
Samotsvety's AI risk forecasts
AI Safety Seems Hard to Measure
AI strategy nearcasting / How might we align transformative AI if it's developed very soon?
Biological Anchors external review by Jennifer Lin (linkpost)
Counterarguments to the basic AI risk case
Resources and lists
EA Opportunities: A new and improved collection of internships, contests, events, and more.
EA & LW Forums Weekly Summary (21 Aug - 27 Aug 22') - the whole series
Forecasting Newsletter: January 2022 - the whole series
The Future Fund's Project Ideas Competition - this is still a bank of really interesting project ideas
See more posts like this here:
Research agendas, questions, and project lists
Practical
Take action
Posts I wrote or was involved with (I searched for these separately; these weren't on my "wrapped" page)
Events, highlighting especially good content, resources
Winners of the EA Criticism and Red Teaming Contest & Announcing a contest: EA Criticism and Red Teaming
Resource for criticisms and red teaming
Other posts
EA should taboo "EA should"
Invisible impact loss (and why we can be too error-averse)
Against "longtermist" as an identity
Notes on impostor syndrome
Thoughts on Forum posts:
A Forum post can be short
You don't have to respond to every comment
Link-posting is an act of community service
Niche vs. broad-appeal posts (& how this relates to usefulness/karma) (a sketch)
Epistemic status: an explainer and some thoughts
Crossposts
Distillation and research debt
Reasoning Transparency
See also the Forum Digest Classics.
This is literally me scanning through quickly and then using the "# + [type a post name]" method to quickly insert hyperlinked posts, then very roughly grouping into categories.
If I didn't list a post, that doesn't mean I didn't think it was great or didn't upvote it.
I think this is underrated:
Offering FAANG-style mock interviews
not in the top forum posts of the year or anything, but I'd want it to get more exposure
Love this! I found it really valuable to be reminded of all the posts I've read this year and to reflect on how they've shaped my thinking. Big props to everyone involved in making this.
This is so awesome, much cooler than Spotify Wrapped!!
I got LG for my forum alignment - I'm guessing that that's the most common one?
Comment if you got a different one (unless you'd rather not (I guess you could make a throwaway account so that no one judges you for being CE)).
I got neutral evil
I am CHAOTIC Good MUAAHAHA
Neutral good! Which is indeed how I identify.
I do predict that most EAs are either lawful good or neutral good.
Looking back on my upvotes, there were surprisingly few great posts this year (<10, if not ~5). I don't have a sense of how things were last year.
If anyone has insights about what they find themselves marking as "most important" and wants to share them with me, I would love to hear more about your ideas. Our goal with this is to learn how we can encourage more of the most valuable content, and your opinion is valuable!
You can message me on the Forum or schedule time to chat here.
Suggestion: make the wrap-up easily shareable - downloadable as a .jpg, for example.
I had to take a screenshot on my phone, which did not capture everything.
I really enjoy this feature. :)