Non-EA interests include chess and TikTok (@benthamite). We are probably hiring: https://metr.org/hiring
Ben_West🔸
It looks like she did a giving season fundraiser for Helen Keller International, which she credits to the EA class she took. Maybe we will see her at a future EAG!
Gave ~50% of my income to my DAF. I will probably disburse it mostly to AI Safety things which make sense on ≤5 year AGI timelines.
Adult film star Abella Danger apparently took a class on EA at the University of Miami, became convinced, and posted about EA to raise $10k for One for the World. She was PornHub's most popular female performer in 2023 and has ~10M followers on Instagram. Her post has ~15k likes, and the comments seem mostly positive.
I think this might be the class that @Richard Y Chappell🔸 teaches?
Thanks Abella and kudos to whoever introduced her to EA!
Thank you for sharing your donation choices!
This is great, thanks for doing this survey!
Kudos for making this post! I think it's hard to notice when money would best be spent elsewhere, particularly when you do actually have a use for it, and I appreciate you being willing to share this.
Fair enough! fwiw I would not have guessed that most PauseAI supporters have a p(doom) of 90%+. My guess is that the actual crux between you is that they believe it's worth pushing for a policy even if you think it's possible you will change your mind in the future. (But people should correct me if I'm wrong!)
"Demis Hassabis, reckless!" honestly feels to me like a pretty tame protest chant. I did a Google search for "protest" and this was the first result. Signs are things like "one year of genocide funded by UT", which seems both substantially more extreme and less epistemically valid than calling Demis "reckless."
My sense from your other points is that you just don't actually want PauseAI to accomplish their goals, so it's kind of over-determined for you. But if I wanted to tell a story about how a grassroots movement successfully got an international pause on AI, various people chanting that the current AI development process is reckless seems pretty fine to me?
Huh, fwiw this is not my anecdotal experience. I would suggest that this is because I spend more time around doomers than you, and doomers are very influenced by Yudkowsky's "don't fight over which monkey gets to eat the poison banana first" framing, but that seems contradicted by your example being ACX, which is also quite doomer-adjacent.
In my post, I suggested that one possible future is that we stay at the "forefront of weirdness." Calculating moral weights, to use your example.
I could imagine, though, that the fact that our opinions might be read by someone with access to the nuclear codes changes how we do things.
I wish there were more debate about which of these futures is more desirable.
(This is what I was trying to get at with my original post. I'm not trying to make any strong claims about whether any individual person counts as "EA".)
Maybe instead of "where people actually listen to us" it's more like "EA in a world where people filter the most memetically fit of our ideas through their preconceived notions into something that only vaguely resembles what the median EA cares about, but is importantly different from the world in which EA didn't exist."
Congrats, Sjir!
This is great, thanks so much for writing it.
I like Bostrom's Letter from Utopia
EA in a World Where People Actually Listen to Us
I had considered calling the third wave of EA "EA in a World Where People Actually Listen to Us".
Leopold's situational awareness memo has become a salient example of this for me. I used to sometimes think that arguments about whether we should avoid discussing the power of AI, in order to avoid triggering an arms race, were a bit silly and self-important, because obviously defense leaders aren't going to be listening to some random internet charity nerds and changing policy as a result.
Well, they are and they are. Let's hope it's for the better.
Thanks for writing this, Alene!
The reason I feel excited about dedicating my life to LIC, however, is because I believe we will win.
Is there something you could share about why you think this? E.g. have analogous projects succeeded before, have the previous cases had judgments indicating that the case would succeed on appeal, etc.?
I don't think MCF has a database (maybe @Joey 🔸 knows?) but this post and this post list their grants
Yes, thank you, I understand that weighting by budget results in the phenomenon you described. I didn't comment on this since it sounds like ACE is planning to change it anyway.
I was referring to the publicity listed in ACE's review. The stories appear to be about the lawsuit, so I am not entirely sure what you mean by "could you provide evidence that this level of publicity was caused by LIC's lawsuit". See e.g. CNN, Fox.
To clarify: I don't care about causing burdens to Costco per se. The reason that burdens are relevant is because future companies might prefer to avoid that burden and instead change their policies. I agree it would be good to have a better model of when this would happen and would be excited for someone to make such a model!
Do you have a sense of how to interpret the differences between options? E.g. I could imagine that basically everyone always gives an answer between 5 and 6, so the difference between 5.1 and 5.9 is huge. I could also imagine that scores are uniformly distributed across the entire range of 1-7, in which case 5.1 vs 5.9 isn't that big.
Relatedly, I like how you included "positive action" as a comparison point, but I wonder if it's worth including something which is widely agreed to be mediocre (Effective Lawnmowing?) so that we can get a sense of how bad some of the lower scores are.
EA Awards
I feel worried that the ratio of criticism to positive feedback one gets for doing EA stuff is too high
Awards are a standard way to counteract this
I would like to explore having some sort of awards thingy
I currently feel most excited about something like: a small group of people solicit nominations and then choose a short list of people to be voted on by Forum members, and then the winners are presented at a session at EAG BA
I would appreciate feedback on:
Whether people think this is a good idea
How to frame this: I want to avoid being seen as speaking on behalf of all EAs
Also, if anyone wants to volunteer to co-organize this with me, I would appreciate hearing that