Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring
Ben_West🔸
I appreciate you being willing to share your candid reasons publicly, Jesse. Best of luck with your future plans, and best of luck to Tristan and Mia!
EA Awards
I feel worried that the ratio of criticism to positive feedback that people get for doing EA work is too high
Awards are a standard way to counteract this
I would like to explore having some sort of awards thingy
I currently feel most excited about something like: a small group of people solicit nominations and then choose a short list of people to be voted on by Forum members, and then the winners are presented at a session at EAG BA
I would appreciate feedback on:
whether people think this is a good idea
how to frame this: I want to avoid being seen as speaking on behalf of all EAs
Also if anyone wants to volunteer to co-organize with me I would appreciate hearing that
- Jan 19, 2025, 10:26 PM; 7 points: comment on "What are we doing about the EA Forum? (Jan 2025)"
It looks like she did a giving season fundraiser for Helen Keller International, which she credits to the EA class she took. Maybe we will see her at a future EAG!
Gave ~50% of my income to my DAF. I will probably disburse it mostly to AI safety things which make sense on ~5-year AGI timelines.
Adult film star Abella Danger apparently took a class on EA at the University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was Pornhub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and the comments seem mostly positive.
I think this might be the class that @Richard Y Chappell🔸 teaches?
Thanks Abella and kudos to whoever introduced her to EA!
Thank you for sharing your donation choices!
This is great, thanks for doing this survey!
Kudos for making this post! I think it's hard to notice when money would be better spent elsewhere, particularly when you do actually have a use for it, and I appreciate you being willing to share this.
Fair enough! Fwiw I would not have guessed that most PauseAI supporters have a p(doom) of 90%+. My guess is that the crux between you is actually that they believe it's worth pushing for a policy even if you might change your mind about it later. (But people should correct me if I'm wrong!)
"Demis Hassabis, reckless!" honestly feels to me like a pretty tame protest chant. I did a Google search for "protest" and this was the first result. Signs are things like "one year of genocide funded by UT", which seems both substantially more extreme and less epistemically valid than calling Demis "reckless."
My sense from your other points is that you just don't actually want PauseAI to accomplish their goals, so it's kind of over-determined for you. But if I wanted to tell a story about how a grassroots movement successfully got an international pause on AI, various people chanting that the current AI development process is reckless seems pretty fine to me?
Huh, fwiw this is not my anecdotal experience. I would have guessed that this is because I spend more time around doomers than you, and doomers are very influenced by Yudkowsky's "don't fight over which monkey gets to eat the poison banana first" framing, but that seems contradicted by your example being ACX, which is also quite doomer-adjacent.
In my post, I suggested that one possible future is that we stay at the "forefront of weirdness." Calculating moral weights, to use your example.
I could imagine though that the fact that our opinions might be read by someone with access to the nuclear codes changes how we do things.
I wish there was more debate about which of these futures is more desirable.
(This is what I was trying to get at with my original post. I'm not trying to make any strong claims about whether any individual person counts as "EA".)
Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about but is importantly different from the world in which EA didn't exist."
Congrats, Sjir!
This is great, thanks so much for writing it.
I like Bostrom's Letter from Utopia
EA in a World Where People Actually Listen to Us
I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us".
Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI in order to avoid triggering an arms race were a bit silly and self-important, because obviously defense leaders aren't going to listen to some random internet charity nerds and change policy as a result.
Well, they are listening, and they are changing policy. Let's hope it's for the better.
- Nov 21, 2024, 9:04 PM; 7 points: comment on "The impact of the counterfactual dollar: a defence of your local food pantry"
Thanks for writing this, Alene!
The reason I feel excited about dedicating my life to LIC, however, is because I believe we will win.
Is there something you could share about why you think this? E.g. have analogous projects succeeded before, have the previous cases had judgments indicating that the case would succeed on appeal, etc.?
I don't think MCF has a database (maybe @Joey 🔸 knows?), but this post and this post list their grants
Wow, that's great. Congrats to you and all the organizers!