Howdy!
I am Patrick Hoang, a student at Texas A&M University.
A lot of EAs do think AI safety could become ridiculously important (i.e. they put some probability mass on very short timelines) but are not in a position to do anything about it, which is why they focus on more tractable areas (e.g. global health, animal welfare, EA community building) under the assumption of longer AI timelines, especially given how much uncertainty there is about when AGI will arrive.
My internal view is a 25% chance of TAI by 2040 and a 50% chance by 2060, where I define TAI as an AI that can autonomously perform AI research. These numbers may have shifted in light of DeepSeek, but what am I supposed to do? I'm just a freshman at a non-prestigious university. Am I supposed to drop all my commitments, speed-run my degree, get myself into a highly competitive AI lab (which would probably require a Ph.D.), and work on technical alignment hoping for a breakthrough? If TAI comes within 5 years, that would be the right move, but if I'm wrong I would end up with shallow skills and little experience.
We have the following Pascal's-wager-style decision matrix (drafted by GPT):
| Decision | AGI Comes Soon (~2030s) | AGI Comes Late (~2060s+) |
| --- | --- | --- |
| Rush into AI Now | 🚀 Huge impact, but only if positioned well | 😬 Career stagnation, lower expertise |
| Stay on Current Path | 😢 Missed critical decisions, lower impact | 📈 Strong expertise, optimal positioning |
I know the decision is not binary, but I am definitely willing to forfeit 25% of my expected impact by betting on the "AGI comes late" scenario. I do think non-AI cause areas should incorporate AI projections into their deliberations and theories of change, but I think it is silly to write off everything that happens after 2040 within those cause areas.
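To spell out the arithmetic behind that 25% figure (a rough sketch with placeholder symbols, not real impact estimates): with my 25% credence in short timelines, compare betting on the late scenario against a perfect-foresight strategy that picks the right row of the matrix in every world:

$$
\mathbb{E}[\text{bet late}] = 0.75\, I_{\text{late}} + 0.25\, I_{\text{soon}}^{\text{mispositioned}},
\qquad
\mathbb{E}[\text{foresight}] = 0.75\, I_{\text{late}} + 0.25\, I_{\text{soon}}.
$$

Even in the worst case, where being mispositioned in a short-timelines world rounds to zero impact and the two worlds offer comparable impact, betting late costs at most about 25% of the perfect-foresight expected value.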
However, I do think EAs should have a contingency plan: speedrun into AI safety if and only if certain trigger conditions occur (e.g. even conservative superforecasters project AGI before 2040, or a national emergency is declared). And we can probably hedge against the "AGI comes soon" scenario by buying long-term NVIDIA call options.
I believe everyone in EA should use the SHOW framework to see how they can increase their impact. To reiterate:
1. Get Skilled: Use non-EA opportunities to level up on those abilities EA needs most.
2. Get Humble: Amplify others’ impact from a more junior role.
3. Get Outside: Find things to do in EA’s blind spots, or outside EA organizations.
4. Get Weird: Find things no one is doing.
I do think getting skilled is the most practical advice. And if that fails, you can always get humble: if you make an EA 10% more effective, you've contributed 10% of their impact!
I would like to add that the EA Forum currently feels too insular; the people who post on the forum tend to be "too EA" in a sense (probably because the high standards intimidate everyone else). This adds to the perception of EA as a closed community, possibly even a cult.
Now of course it is good to have a forum dedicated to the most active, highly engaged EAs, but this leads to greater community homogeneity (EA-adjacent people simply don't interact). I don't know the best solution, but I would prefer a style closer to 80,000 Hours, which I find more grounded. I do like the idea of reframing the EA Forum as an intranet, or perhaps having a casual EA Forum space alongside a more dedicated one.
Hello! I'm in the same boat as you: I'm also a high school senior interested in doing some remote EA opportunities during the summer, mostly to test my personal fit with different fields, or at least to develop some general skills.
If a podcast will make you 9x more effective, then theoretically you should be willing to spend up to 80,000 of the 90,000 hours of your career searching for that multiplier.
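To spell out that back-of-the-envelope arithmetic (assuming the search always succeeds): if you spend a fraction $f$ of your career searching and work at $9\times$ effectiveness for the remaining $1-f$, searching beats the baseline as long as

$$
9(1 - f) \ge 1 \iff f \le \tfrac{8}{9},
$$

and $\tfrac{8}{9} \times 90{,}000 = 80{,}000$ hours.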
What should EAs who are not in a position to act on short AI timelines do? You can read my response here, but not all of us work in AI labs or expect to break in anytime soon.
You also suggested having a short-timeline model to discount things after 5+ years:
But I wouldn't apply such a huge discount rate if one still assigns a real chance to longer AGI timelines. For example, if you believe AGI only has a 25% chance of occurring by 2040, you should discount 15+ year plans by only 25%. The real reason to discount certain long-term plans is that they are not tractable (for example, I think executing a five-year career plan is tractable, but ALS5-level security probably is not, given how slowly governments move).
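As a rough illustration of the discount I have in mind (the symbols are mine, just to make the claim concrete): if a plan with value $V$ only pays off after 2040 and is derailed whenever AGI arrives first, its expected value is

$$
\mathbb{E}[V] = \big(1 - P(\text{AGI by 2040})\big)\, V = (1 - 0.25)\, V = 0.75\, V,
$$

i.e. a 25% haircut, not the near-total write-off that a 5-year planning horizon would imply.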