I think that when discussing career longtermism we should keep in consideration the possibility of short AGI timelines (or the possibility of some non-AI-related existential catastrophe occurring in the near term). By the time we transition from learning and building career capital to trying to impact the world, it might be too late to make a difference. Perhaps an existential catastrophe has already occurred, or AGI has arrived and so far outclasses us that all that time spent building career capital was wasted.
For example, I am in my first year of an economics PhD. Social impact through academia is very slow, and I worry that by the time I am able to create any impact through my research, it might be too late. I chose this path because I believe it will give me valuable and broadly robust skills that I could apply to impactful research. But now I wonder if I should have pursued a more direct and urgent way of contributing to the long-term future.
Many EAs, like me, have chosen paths in academia, which has a particularly long impact trajectory and is therefore especially vulnerable to short timelines.
PS: I recently switched to the Microsoft Edge web browser and was intrigued to see if the Bing AI could help me write this comment. The final product is a heavily edited version of the output it gave after multiple prompt attempts. Was it faster or better than writing the entire comment myself? Probably not.
Thanks for your comment, and that’s a fair critique: I agree that impact through academia is slow. However, at this stage it’s difficult to plan which jobs you should be training for if AI replaces your current role, so it still makes sense to do something that broadly expands your career capital, as you say, whether that’s a PhD or something else. I would have thought the likelihood of an X-risk occurring within the span of your PhD is quite small, but I’ll leave the quantification to the experts! AI is probably least likely to disrupt more practical, non-academic roles, which could be an argument for gaining career capital outside the knowledge sector (e.g. see this Times article: bit.ly/3M8Utpr). I didn’t know the Bing AI had been rolled out yet; I’ll have to give it a try. I’m curious how it will develop over time, how quickly, and whether it will make my new job faster or ultimately replace me or some of the workforce.
My argument doesn’t hinge on whether an X-risk occurs during my PhD. If AGI is 10 years away, it’s questionable whether investing half of that remaining time in completing a PhD is optimal.