Jamie is a Program Associate at Polaris Ventures, doing grantmaking to support projects and people aiming to build a future guided by wisdom and compassion for all. Polaris’ focus areas include AI governance, digital sentience, and reducing risks from fanatical ideologies and malevolent actors.
He also spends a few hours a week as a Fund Manager at the Effective Altruism Infrastructure Fund, which aims to increase the impact of projects that use the principles of effective altruism by increasing their access to talent, capital, and knowledge.
Lastly, Jamie is President of Leaf, an independent nonprofit that supports exceptional teenagers to explore how they can best save lives, help others, or change the course of history. (Most of the hard work is being done by the wonderful Jonah Boucher though!)
Jamie previously worked as a teacher, as a researcher at the think tank Sentience Institute, and as co-founder and researcher at Animal Advocacy Careers, which helps people to maximise their positive impact for animals.
I didn’t write that wording originally (I just copied it over from this post), so I can’t speak exactly to their original thinking.
But I think the phrasing includes the EA community; it just uses the plural to avoid excluding others.
Some examples that jump to mind:
- EA
- Rationality, x-risk, s-risk, AI Safety, wild animal welfare, etc., to varying degrees
- Org-specific communities, e.g. the fellows and follow-up opportunities on various fellowship programmes.
I think this suggests more of a sense of unity/agreement than I expect exists in practice. These are complex things and individuals have different views and ideas!
Thanks for thinking this stuff through and coming up with ideas!