I also wanna give general encouragement for sharing a difficult rejection story.
(Removed this comment. Don’t know how to delete it.)
I feel concerned about versions of this where there is implicit social pressure to:
stay;
seem fine with the critiques given;
participate in the first place.
Like, if it’s implicitly socially costly to opt out, it’s pretty hard for an individual to actually do so.
I also think that it’s hard to avoid these pressure-y dynamics in practice, especially when people really want to be included in the social group.
I can imagine a scenario where there is a subtext of:
“You can opt out. Of course. But, as we know, the real hard-core and truth-seeking people stay. And this social group values truth-seeking. So… you can leave. But that is an out-group thing to do. Come on guys, it’s virtuous to seek truth! And we are just providing you with an opportunity to do that! Don’t tell me that you’d rather hide from the truth than get your feelings hurt.”
(I’m overdoing this a bit to illustrate my point.)
Thanks for writing this post! It resonated, and I feel like I’ve fallen into a similar mindset before.
It reminds me of a point made here: “like, will we wish in 5 years that EAs had more outside professional experience to bring domain knowledge and legitimacy to EA projects rather than a resume full of EA things?”
When reading the post, this felt especially true and unfortunate: “They get the reputation as someone who can “get shit done” but in practice, they’re usually solving ops bottlenecks at the cost of building harder-to-acquire skills.”
(Currently reading the post and noticing that many of the links go to the top of the same Google Doc. I assume this isn’t supposed to be the case. It could be because I’m on mobile, but it could also be an error with the links.)
(Also congrats on your first forum post! Go you :) )
I’ve been considering writing a post about my experience of receiving a grant, and the downsides I didn’t anticipate beforehand. It would probably look similar to this comment but with more info.
This poem really made me smile; thanks for writing it Luke :)
Hey, thanks for writing.
I also used to feel extremely confused about this (e.g. I thought that in-person university groups were “woefully inefficient” compared to social media outreach). I did not understand why there weren’t EA YouTubers or social media marketing campaigns. Much of my own social conscience had been shaped by online creators (e.g. veganism and social justice ideas), and it felt like a tragedy that EA was leaving so much lying on the table.
I’m now less optimistic about short-form social media outreach. Mostly because:
It seems really hard to preserve epistemics in low-fidelity mediums like TikTok;
I don’t see that much value in EA being a household name if it’s a meme-y, low-resolution version (but my mind could easily be changed on this);
I care about selecting for nerdiness and intellectual curiosity;
I’m cautious of EA being associated too much with specific influencers;
I don’t want EA to become (or be perceived as) a social media trend.
All that being said, I do think there are versions of social media outreach that could be great (and aren’t currently being done).
I’m excited about more long-form YouTube content (e.g. Rob Miles). It would be cool if one of the LEEP founders/CE incubatees started vlogging about the experience of running a high-impact charity (or something similar).
Fwiw, YouTuber Ali Abdaal has some videos promoting longtermism, 80k, and GWWC. And 80k is currently ramping up their marketing and starting to pay influencers to promote 80k.
Congrats for organising EAGx—that’s huge! :)
Sorry for being a downer, but I want to push back on the subtext that it’s (always) good for people to be willing to “lend a helping hand, whether it’s sending a message, reviewing a draft or hopping onto a call?”
My rough thoughts:
Some people say yes to too many things and don’t value their time highly enough.
Sometimes, it’s the right call for someone to say no to helping others in their immediate environment.
It’s often hard to say no, even when it’s the right call.
I’m worried about a culture where {saying yes to people’s requests} --> {you’re a nice and helpful person} --> {it’s good that you’re an EA because you’re warm and welcoming}.
I’m worried about the message “EA is warm and welcoming because people are willing to give you their time” making it harder for people to say no.
This might not be super relevant—especially if most of the audience would err on the side of not asking for help.
But just wanted to comment because it came to mind.
The overall message of “people are kind and not scary and probably willing to help” is a nice one though!
General encouragement for having done something risky (a wacky title) and then deciding against it and changing it. The first sentence of the changed post made me laugh.
“Better” could mean lots of things here. Including: more entertaining; higher quality discussion; more engagement; it’s surpassed a ‘critical mass’ of people to sustain a regular group of posters and a community; better memes; more intellectually diverse; higher frequency of high quality takes; the best takes are higher quality; more welcoming and accessible conversations etc.
The aims of EA Twitter are different from those of the Forum. But I think the most important metrics here are the “quality of discussion” ones.
My impression is that:
There are more “high quality takes” on EA Twitter now than a year ago (mostly due to more people being on it and people posting more frequently).
The “noise:quality ratio” is pretty bad on EA Twitter. Most of the space seems dominated by shitposting and in-group memes to me.
Obvs, shitposting is fine if that’s what you want. But I think it’s useful to be clear about what you mean when you say “better”. If someone was looking for high-quality discussion about important ideas in the world, I would personally not recommend EA Twitter to them.
Pretty confused by what some of the cause areas are (e.g. epistemic institutions). I expect my responses were less helpful/accurate because I didn’t know what some of them meant.
What does PB/CM mean?
If I were going to spend longer on this post, I’d make it more empirical and talk through the evidence for/against the effectiveness of ACT.
As it is, I didn’t want to spend significantly longer writing it, so I’ve gone for a summary of the core ideas that lets readers assess the vibe and see whether it sounds interesting to them.
This might have been the wrong call though.
A distinction I’ve found useful is “object-level” vs “social reality”. Both are labels that describe types of conversations/ideas.
Object-level discussions are about ideas and actions (e.g. AI timelines, the mechanics of launching a successful startup). Object-level ideas are technical, empirical, and often testable. Object-level refers to what ideas are important or make sense. It is focused on truth-seeking and presenting arguments clearly.
Social reality discussions are about people and organisations (e.g. Will MacAskill, Open Philanthropy). Social reality is more meta, more abstract, and less testable than the object level. Social reality refers to which people are influential/powerful (and what they think), how to network with people, and how to persuade people.
Object-level: What’s the probability of extinction this century?
Social reality: What does Toby Ord think is the probability of extinction this century?
I have found it very helpful to start labelling whether I’m in object-level conversation mode vs social reality conversation mode. It helps me notice when I’m deferring without having thought about it (e.g. “well, Will MacAskill says [x]” instead of asking myself what I think about [x]), or when I fall into a mode of chit-chatting about the who’s who of EA instead of trying to truth-seek (of course, chit-chatting sometimes is fine—I just want to be intentional about when I’m doing it).
And social reality isn’t necessarily bad, but it’s helpful to flag when a conversation enters “social reality mode.”
I do think it’s good for many/more/most conversations to centre around the object-level. I am personally trying to move my ratio more towards object-level.
(This was a core theme of an Atlas camp I attended, which I found extremely valuable. The above definitions are loosely based on a message from Jonas, but I didn’t run them by him before posting.)
Sorry that your experience of this has been rough.
Some quick thoughts I had whilst reading:
There was a vague tone of “the goal is to get accepted to EAG” instead of “the goal is to make the world better,” which I felt a bit uneasy about when reading the post. EAGs are only useful insofar as they enable community members to do better work in the real world.
Because of this, I don’t feel strongly about the EAG team providing feedback to people on why they were rejected. The EAG team’s goal isn’t to advise applicants on how to fill up their “EA resume”; it’s to facilitate impactful work in the world.
I remembered a comment that I really liked from Eli: “EAG exists to make the world a better place, rather than serve the EA community or make EAs happy.”
[EDIT after 24hrs: I now think this is probably wrong, and that responses have raised valid points.] You say “[others] rely on EA grants for their projects or EA organizations for obtaining jobs and therefore may be more hesitant to directly and publicly criticize authoritative organizations like CEA.” I could be wrong, but I have a pretty strong sense that nearly everyone I know with EA funding would be willing to criticise CEA if they had a good reason to. I’d be surprised if {being EA funded} decreased willingness to criticise EA orgs. I even expect the opposite to be true.
(Disclaimer that I’ve received funding from EA orgs)
Sorry if the tone of the above is harsh—I’m unsure whether it’s too harsh, or whether this is the appropriate space for this comment.
I’ve erred on the side of posting because it feels relevant and important.