It’s the correct figure—catering in CA especially is just that expensive.
A Report on running 1-1s with EA Virtual Programs’ Participants
EAGx application community norms we’d like to see
How local groups can leverage EA conferences
Catering can often be a lot more than $30 USD per person per meal. And it’s also sometimes necessary to go with a certain catering company and meet a minimum in order to book a venue.
Many venues don’t let you bring in outside food. Catering is extremely overpriced (the cost per person per meal) and oftentimes you can’t sign a contract for a venue without agreeing to catering costs.
Improving EA events: start early & invest in content and stewardship
Thanks Nicole (and I’m impressed by how quickly you got the write-up done)!! I’m so glad people enjoyed it. For other EAG / EAGx organisers—this was a great addition to our conference and gave attendees a tangible next step for staying engaged. Would highly recommend.
This is an amazing guide ❤️ Thank you for sharing!! I’d also be curious to see a post with more details on building the Spanish-speaking EA community (what you’ve done, what’s been successful, etc) as a model for other community building efforts
Get everyone to stop treating EA as a homogeneous group, e.g. by making “there is no one EA” a key point in the Intro Fellowship, encouraging people not to use that language on the Forum, etc.
What role do you think bioethics and bioethicists have in biosecurity and AI regulation? I’ve been thinking a lot about how represented bioethics should be in the two fields and whether advocating for their increased involvement would help reduce x-risks. Thoughts?
There should be alternatives to EAGs/EAGxs—ones that are cause-area specific and/or for people interested in EA ideas who don’t necessarily call themselves EAs.
Responding to the attention on Kathy’s specific case (I’m aware I’m adding more to it) - I think we’re detracting from the key argument that the EA community as a whole is neglecting to validate and support community members who experience bad things in the community
In this post, it’s primarily women and sexual assault. But there are other posts (1, 2) exemplifying ways the EA community itself can and should prioritise internal community health. Arguing over the truth of one specific example might detract from recognising that this could be a systemic problem.
That’s a true point—but I don’t think that’s a good objective. EA should strive to exist with the best, highly-aligned-to-doing-good people, and I think we need a culture that prioritises people’s lived experiences, feelings, and interactions for that to happen.
I’m strongly in favour of this—it often feels like what’s needed is to make this public so it becomes something the entire community is responsible for, as opposed to how it currently is (private, and something CEA’s community health team is mainly responsible for).
Your comment (at least as it reads, which may differ from your intentions) comes across as “that’s a particularly problematic location, just go to a different one”.
That doesn’t solve the problem. It doesn’t hold the Bay* or any community accountable or push for change in a positive direction. I think that sort of logic is a common response to what Maya writes about, and it doesn’t help or make anything better.
*and this is coming from an ex-Berkeley community builder
(saying this in a friend capacity and in shock that I haven’t introduced you two already) - you two should definitely talk!
Doing Ops in EA FAQ: before you join (2022)
I love it! Also, having a permanent tag for drafts and unfinished thoughts would be nice too?
I can message you more if you want - but generally I think doing 1-1s with newer EAs (or people who wouldn’t necessarily even call themselves EAs—like people in the Intro Fellowship generally) requires extra transparency and communication around expectation setting and goals.
For us, this generally looked like making it clear in the email / form what 1-1s are, what their purpose is, and what they aren’t (e.g. they aren’t about making a career connection simply to get someone to leave their job for an EA-aligned one). And then making it clear to the people running the 1-1s some norms around setting expectations, approaching EA as an open question, and not assuming person X is interested in doing the most effective things or taking action Y.