I co-founded WhiteBox Research in 2023 and lead its operations and marketing. WhiteBox aims to solve open research problems in AI interpretability and develop more AI safety researchers in Southeast Asia. I’m also a co-founder and board member of EA Philippines.
I was previously a Group Support Contractor for the Centre for Effective Altruism (CEA) for two years, where I helped support EA groups around the world.
You can reach out to me at brian@whiteboxresearch.org or find me on LinkedIn.
Thanks for making this podcast feed! I have a few comments about what you said here:
I think if you are going to call this feed “Effective Altruism: An Introduction”, it doesn’t make sense to skew the selection so heavily towards longtermism. Given the current list of episodes, a more accurate title would have been “An Introduction to Effective Altruism & Longtermism”.
In particular, I think it would be better if the Lewis Bollard episode were added, along with one on global health and development, such as the episode with Rachel Glennerster or the one with James Snowden (which I liked).
If 80K wanted to limit the feed to 10 episodes, then two episodes would have to be taken out. As much as I like the episode with David Denkenberger, I don’t think learning about ALLFED is “core” to EA, so that’s one I would remove. A second episode is a harder choice, but I would choose among the episodes with Will MacAskill, Paul Christiano, and Hilary Greaves. I guess I would pick the one with Will, since I didn’t get much value from that episode, and I’m unsure whether others would.
Alternatively, an easier solution is to expand the feed to 12 episodes; 12 isn’t that much further from 10.
I think it is important to include episodes on animal welfare and on global health and development because:
- The EA movement does important work in these two causes.
- Many EAs still care about or work on these two causes, and would likely want more people to continue entering them.
- People who are pointed to this feed and don’t get interested in longtermism (or aren’t a fit for careers in it) might conclude that the EA movement is not for them, even when it could be, if they just learned more about animal welfare or global health and development.
As a broader point, when we introduce or talk about EA, especially to a large audience (like 80K’s), I think it’s important to convey that the EA movement works on a variety of causes and worldviews.
Even from a longtermist perspective, I think the EA community is better the broader it is, including when it supports work on “non-longtermist” causes such as global health and development and animal welfare. A broader community can be a bigger one, and a bigger community is probably better able to influence things for the long-term better, for example by having more people in government or other influential roles.
These are just my thoughts. I’m open to hearing others’ thoughts too!