UC Berkeley ’26
Dev Sajnani
I was introduced to effective altruism by a friend and found the Forum through the effectivealtruism.org website. I was a lurker for quite a while before making this post.
Hi! My name is Dev and I’m 17 years old. I’m a recent high school graduate about to start university in the fall of 2022, and I’m looking forward to interacting here. I’m interested in a lot of areas, including global priorities research, AI alignment, existential risk and s-risks, and energy poverty, but I’m still trying to figure out the best path to take, since I’m at quite an early stage in my career. Of these topics, I’d say I’m most well-informed about energy poverty, and I’m currently reading Superintelligence to get a better sense of AI alignment. I’m not sure yet what I want to do to have the most impact, but I welcome anyone who might want to have a conversation.
A few random thoughts I have on this:
I’ve tried speaking to a few non-EA people (very few, countable on one hand), and I somewhat agree: they do react as though you’ve watched way too much sci-fi when you bring up AI safety, though they don’t dismiss it as entirely far-fetched. One specific conversation made me realize that a lot of people simply think they can’t do much about it. ‘Leave it to the experts’ or ‘I don’t know anything about AI and ML’ seem to be common thoughts among non-EA people, which prevents them from actively trying to reduce the risk, if the issue makes it onto their list of important problems at all.

There’s also the fact that AI safety isn’t yet a major field, which leads to misconceptions like the belief that you need a compsci PhD and deep technical math/CS knowledge to work in AI safety, when there actually exist roles that don’t require such expertise. This quite obviously deters people from changing careers to work in AI safety, but, even more so, it discourages them from reading about it at all (which might also be why distillation of AI alignment work is in such high demand), even though people read about international conflicts, nuclear risk, and climate change far more often (I’m not sure of the difference in scale, but I can personally vouch for this, since I had never heard of AI alignment before joining the EA community).