I’m at EAG NYC right now, and as part of our Forum writing session, we’re asking participants to respond to this quick take with their Forum post ideas.
Encourage ideas you’d like to see! It’s Draft Amnesty next week…
Post idea as an ex-USAID advisor: I’ve been struck by how little overlap there is between the USAID/GH world and the EA community at EAG NYC. I’ve met maybe three others from USAID out of ~800 people at the conference, and with so many talented folks now in transition, it feels like the right moment to ask where they might (or might not) fit in EA spaces.
Love this idea!
Thanks for bringing this up, Camille! Noting that we at Probably Good would be happy to speak to you, or others like you transitioning from USAID, about where you might best use your talent and experience.
EA probably didn't overlap much with USAID's causes because those causes weren't neglected: USAID was covering those needs. With USAID gone, EA should reevaluate neglectedness in previously covered areas and probably shift interest toward the newly neglected ones.
I may be posting two:
1- Increasing EA effectiveness by finding opportunities to leverage resources that are currently not accepted into EA roles. This could be its own short draft placeholder post.
2- Disagreement on longtermist reasoning (considering the future impact of some interventions while ignoring the long-term impact of current interventions). This would be a reply to a current post/chapter.
Some ideas I had:
Advice I found useful for my PhD decision
Transformers Struggle to Learn Search and my model of LLM capabilities
Why EA and AI Safety groups should recommend watching Pantheon
A meta-analysis of university group organizer advice posts
Love the meta-analysis idea. What's Pantheon?
Pantheon is a show that recently ended up on Netflix; it covers a singularity caused by mind uploads. I think it gives really good intuitions about what a fast AI takeoff might look like.
Post idea: a history of my time as a volunteer organizer of EA NYC. I'm still working out the right presentation format for this: potentially a brief post with some history and possible topics, plus a request for opinions on which items to expand.
Do we need a yearly strategy paper of the EA movement?
I’m not there, but yes, I would support this idea!
Idea: write a blog post about why research output is a distribution. A bunch of EAs seem to forget that the binary "useful"/"not useful" framework is (haha) not useful, and that we should be looking at expected values instead. EAs know this but often act as if they don't. (E.g., empirical people still criticize theory as useless, when, in the face of short timelines, I don't really see how some random specific circuit is going to save us from ASI.)
What I learned organizing a uni group for a semester
Differences between my experience in my local EA community and in international EA
“I thought all charities were meant to be effective”—a quote from someone at EAG NY
We should hedge against the possibility that AI safety in the US becomes polarized and comes to be seen as a left-only issue. So maybe we should consider giving AI Pause right-wing branding as a hedge.
Book review of The Sovereign Individual, and an update on its core thesis given that China managed to prevent sovereign individuals from becoming more powerful than the state
Always looking for more book reviews. Does this have relevance to effective altruism? I think it could; for example, I wonder what it might say about controlling power-seeking AI or AI companies.