Working on AI governance and policy at Open Philanthropy.
Hater of factory farms, enjoyer of effective charities.
Thank you for the work you and your team do, Julia. Many of these situations are incredibly tricky to handle, and I’m very grateful the EA community has people working on them.
Here is a first stab I took at organising some pieces of content that would be good to test your fit for this kind of work. I tried to balance it as much as I could with respect to length, difficulty, format, and cause area.
+1 — the wiki is awesome! Though I’d love to see specific distillations of standalone written works, in addition to the topic-style distillations seen on the wiki.
I’m going to write out a list of ~10-15 pieces of content I think would be good to distill, and I’ll share it here once I’m finished.
(1) — I think there is probably a correlation between good distillers and good researchers, but it isn’t one-to-one. Distillers probably have a stronger comparative advantage in communication and simplification, whereas researchers would probably be better at creativity and diving deep into specific focus areas. It seems like a lot of great academics struggle with simplifying their core ideas to a level of abstraction that a general audience can understand, and with broadcasting them widely.
(2) — completely agree, I think it would be a great skill signal.
I love the fellowship idea as well!
Ah I totally forgot to include a footnote about the Nonlinear library! For me, it’s helpful, but I sometimes find the text-to-speech a bit hard to focus on because it isn’t quite natural. But maybe I’m just a pedant.
Maybe, but I think it would be good if someone built a really strong comparative advantage with this. Describing and then evaluating the success criteria of bounties could also carry some slightly burdensome overhead.
Also +1 that having hubs in the US and UK is sub-optimal.
To your knowledge, have there been any efforts to systematically compare different hub candidates? I’d be curious to see the reasoning behind why location A might be preferable to B, C, D, etc.
Hey! A few thoughts:
From an instrumental POV, donating to an effective charity that keeps you motivated to continue direct work is probably a good strategy. I sometimes donate to the LTFF, but would probably feel less motivated if all of my donations went there. “Fuzzies” from AMF help me stay motivated, and I think that increases my overall impact. If you’re concerned about the direct wording of the pledge and you feel longtermist charities are better in that regard, there’s probably some allocation between those and GH&D charities that would allow you to be in the sweet spot.
I can’t quite articulate why, but I feel that effective giving should be exciting and motivating. I would be sad if I knew people were stressing out about whether they’re doing the most good (impossible to tell) when they’re already doing a hell of a lot of good by giving to charities like AMF. Being excited and signalling to others that effective giving isn’t a chore is also good for inspiring others.
GWWC as an organisation recommends GiveWell top charities (what I assume you mean by GH&W charities), and a large fraction of the community gives there as well. That should be a pretty strong signal.
Within the broad portfolio of effective charities that EAs support, the decision of where to give often hinges on worldviews that are highly uncertain (should I donate to AMF, or the Good Food Institute, or a wild animal research institute? What about a longtermist charity? It’s unclear a priori). To me, it’s perfectly sensible to hedge a bit with your overall altruistic portfolio (donate to neartermist global health charities while still doing direct longtermist work).
All of this stuff is really hard and unclear, so just do what you think is best while also remembering it is nearly impossible to be a perfect effective altruist who always “maximizes the good”. And be proud of yourself for caring enough to think about it :)
Thanks for writing this!
A few thoughts:
You touch on this briefly at the end, but I think what is missing from these “consider leaving EA” posts is what one might do before getting to the point where leaving is the best option. What might some early warning signs be, and what are some less intense measures someone might wish to consider prior to leaving entirely?
I wonder what we can do as a community to make these sorts of considerations easier on people. How might we be able to build structures/processes that can help people before it gets bad?
I think you might be underestimating the difficulty of coming back to the community after leaving. A few things that might be difficult about returning are (a) feeling like you’re out of the epistemic loop, (b) feeling weird about suddenly reappearing when people might notice you’ve been gone, and (c) needing to make life decisions while you’re gone (finding new jobs, moving, etc.).
Thanks for sharing, Michael! This was super informative and interesting.
EA is neglecting trying to influence non-EA organizations, and this is becoming more detrimental to impact over time.
+1 to this — it’s something I’ve been thinking about quite a bit lately, and I’m happy you mentioned it.
I’m not convinced the EA community will be able to effectively solve the problems we’re keen on tackling if we mainly rely on a (relatively) small group of people who are unusually receptive to counterintuitive ideas, especially highly technical problems like AI safety. Rather, we’ll need a large coalition of people who can make progress on these sorts of challenges. All else equal, I think we’ve neglected the value of influencing others, even if these folks might not become highly active EAs who attend conferences or whatever.
On the contrary, my best guess is that the “dying with dignity”-style dooming is harming the community’s ability to tackle AI risk as effectively as it otherwise could.
Let’s fulfil Mill’s wishes by buying some coal mines.
Ah. Duh. My bad!
Right now I’m an MSc student at the Oxford Internet Institute, studying part-time for a degree in Social Science of the Internet with a focus on economics.
I also work on content & research at Giving What We Can, which mostly involves simplifying and translating core EA ideas to something that a general audience would like to read/watch.
This summer I will be self-studying AI governance and then joining GovAI as a summer research fellow. Provided this path seems promising for me, I’m hoping to work at the intersection of policy and research in the AI governance field for the foreseeable future.
I’d specifically like to get better at writing clearly, critical/creative thinking (using helpful mental models, having better reasoning transparency, and generally being more rational), and researching (more specifically, reading/interpreting a lot of existing research and forming my own inside view more quickly). More generally, I think I could probably also use better quantitative skills (economic modelling plus interpreting/working with data/statistics). I could also be a more organised person.
I’d also like to start working on my leadership skills so I’m better prepared for later on when I become a more senior member of whatever team I’m on.
For global health & development, I think it is still quite useful to have influence over things like research and policy prioritisation (what topics academics should research, and what policy areas think tanks should focus on), government foreign aid budgets, vaccine R&D, etc. This is tangential, but even if Dustin is worth a large number of low-value donors (he is), the marginal donation to effective global poverty charities is still very impactful.
For AI, I agree that it is tricky to find robustly net-positive actions, as of right now at least. I expect this to change over the next few years, and I hope people in relevant positions to implement these actions will be ready to do so once we have more clarity about which ones are good. Whether or not they’re highly engaged EAs doesn’t seem to matter much, so long as they actually do the things, IMO.