I have work experience in HR and Operations. I read a lot, I enjoy taking online courses, and I do some yoga and rock climbing. I enjoy learning languages, and I tend to bring a fairly international, cross-cultural perspective to my life. I was born and raised in a monolingual household in the US, but I’ve lived most of my adult life outside the US: about ten years in China, two years in Spain, and less than a year in Brazil.
As far as EA is concerned, I’m fairly cause-agnostic/cause-neutral. I think I’m a bit more influenced by virtue ethics and Stoicism than the average EA, and I also occasionally find myself thinking about inclusion, diversity, and accessibility in EA. Some parts of the EA community that I’ve observed in person seem not very welcoming to outsiders, or somewhat gatekept. I tend to care quite a bit about how exclusionary or welcoming communities are.
I was told by a friend in EA that I should brag about how many books I read because it is impressive, but I feel uncomfortable being boastful, so here is my clunky attempt to brag about that.
Unless explicitly stated otherwise, opinions are my own, not my employer’s.
(Not well-thought-out musings; I’ve only spent a few minutes thinking about this.)
In thinking about the focus on AI within the EA community, the Fermi paradox popped into my head. For anyone unfamiliar with it who doesn’t want to click through to Wikipedia, my quick summary of the Fermi paradox is basically: if the probability of extraterrestrial life is so high, why haven’t we seen any indications of it?
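(To make the “high probability” premise concrete, here is a quick toy version of the Drake equation in Python. All the parameter values below are my own illustrative guesses, not serious estimates; the point is just that even modest-looking fractions can multiply out to a non-trivial number of civilizations.)

```python
# Toy Drake-equation calculation. The equation itself is standard;
# every numeric value here is an illustrative guess, not a real estimate.

R_star = 2.0     # star formation rate in the galaxy (stars/year) -- illustrative
f_p    = 0.9     # fraction of stars with planets -- illustrative
n_e    = 0.5     # habitable planets per star that has planets -- illustrative
f_l    = 0.1     # fraction of habitable planets where life arises -- guess
f_i    = 0.1     # fraction of those where intelligence evolves -- guess
f_c    = 0.1     # fraction of those that emit detectable signals -- guess
L      = 10_000  # years a civilization remains detectable -- guess

# N: expected number of currently detectable civilizations in the galaxy
N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(f"Expected detectable civilizations: {N:,.0f}")  # ~9 with these guesses
```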
On a very naïve level, AI doomerism suggests a simple solution to the Fermi paradox: we don’t see signs of extraterrestrial life because civilizations tend to create unaligned AI, which destroys them. But I suspect that the AI-relevant variation would actually be something more like this: civilizations tend to create superintelligent AI eventually, and whether or not that AI is aligned, it doesn’t necessarily do anything we could observe from afar.
Like many things, I suppose the details matter immensely. Depending on the morality of its creators, an aligned AI might spend resources expanding civilization throughout the galaxy, or it might happily putter along maintaining one planet’s agricultural systems. Depending on how an unaligned AI is unaligned, it might be focused on turning the whole universe into paperclips, or it might simply kill its creators to prevent them from enduring suffering. So, on a very simplistic level, it seems that the claim that “civilizations tend to make AI eventually, and it really is a superintelligent and world-changing technology” is consistent with the reality that we don’t observe any signs of extraterrestrial intelligence.