Hi!
My main personal project for the summer is trying to figure out what I think about AI-risk, so I thought I should engage with the forum more to ask questions/solicit feedback. I’m currently a mathematics undergrad, about to start my 4th year, so part of this is trying to figure out whether or not I should pivot toward working in something closer to AI-risk.
About me—I first got interested in EA after reading Reasons and Persons in the summer of 2020. My main secondary academic interest in undergrad has been in political theory, so I’m very interested in questions such as whether naïve utilitarianism endorses political extremism, how that might be mitigated by a proper social epistemology, and what that might entail for consequentialists interested in voting/political process reform. I’m also very interested in the economics of cities and innovation, as well as understanding how we learn mathematics. I’m less sure how those topics fit in an EA framework, but I’m always interested in seeing what insights others might be able to bring to them from an EA standpoint.
Here’s hoping to learn a lot from y’all!
-- Edgar
Two articles that you might find helpful:
AGI Safety from First Principles by richard_ngo
My Personal Cruxes for Working on AGI Safety by Buck
The former is an argument for why AGI safety is potentially a really big problem (maybe the biggest problem of our lifetimes), and the latter steps into the internal thought processes of an individual deciding whether to work on AGI safety over other important longtermist causes.
Great to meet you! You might be interested in some posts in the AI forecasting and Estimation of existential risk categories, such as:
Draft report on existential risk from power-seeking AI
Survey on AI existential risk scenarios
I’ve also written a lot about AI risk on my shortform.