My background is in philosophy, so I’ve been familiar with (and convinced by) Peter Singer’s work since roughly 2007. I first heard about EA in early 2015 when Will MacAskill gave a talk at UT Austin, where I was working on my PhD. He and I chatted a bit about Bostrom’s Superintelligence, which I happened to be reading at the time for totally unrelated reasons. Talking about AI safety and global poverty (the subject of Will’s talk) in the same conversation was kind of a revelatory moment, and all of EA’s conceptual pieces just sort of fell into place.
The thing that keeps me motivated is how intrinsically interesting I find my research. Of course I hope to make a difference, but my work is so far removed from immediate measurable impact that I don’t really think about that on a day-to-day basis.