[Question] [Seeking Advice] 19y/o deciding whether to drop dentistry double major for single CS major to save 4 years and focus on AI risks


TL;DR:
I’m a 19-year-old college freshman in Taiwan, and I can’t decide between double majoring in dentistry and CS (which takes 8 years) or single-majoring in CS (which takes only 4). I am already deeply committed to AI safety and read widely, but I lack people who can sanity-check my reasoning. I am looking for someone willing to occasionally discuss these questions with me online, even asynchronously—no time pressure at all.

Hello everyone,

My name is Jack, and I’m a 19-year-old freshman at a Taiwanese university. I haven’t decided my major yet, but I have to choose within a year. My family has always expected me to become a dentist or doctor. (In Taiwan, dentistry and medicine are both undergraduate-entry programs: college and dental/medical school are combined into a single 8-year major, and students receive their license after completing the 8-year curriculum.)

I identify more with negative utilitarianism, so I hope to devote my career to s-risk reduction, especially AI s-risks.

I am currently deciding between two main options: double majoring in dentistry and CS, or majoring only in CS (which would take 4 years instead of 8). I now favor dentistry over medicine because dentistry doesn’t require 4 years of residency, so a dentist can begin practicing and earning a full income 4 years earlier than a doctor, and the hourly wage of dentists is almost equal to that of average doctors in both the US and Taiwan. Nevertheless, I’m uncertain; it’s possible that double majoring in medicine and CS is actually better. I am not worried about burnout, but the 4-year time cost is significant. At present, I lean around 70% toward switching to CS only, because dental or medical training seems much less relevant to AI-related s-risks, but I am still genuinely unsure about the decision.

Below I summarize my main reasoning (I also have longer decision documents with more detailed frameworks, if anyone would like to read them).

Key question: How feasible is it to work on reducing s-risks while employed outside EA-aligned organizations?

As a freshman, I still lack deep expertise and intuition in AI. Despite reading nearly all the relevant materials on 80,000 Hours, the BlueDot AI Alignment course, the EA Forum, LessWrong, CLR, CRS, and Tomasik/Baumann/Vinding, and spending hundreds of hours studying, thinking, and discussing with different LLMs, I am only around 80% confident (not 90–100%) that AI s-risks outweigh global health work.

That said, we always have to decide under uncertainty, and I think 80% confidence that AI s-risks are more important than human disease suffering is enough to act on. Ideally, if income were not a consideration, single-majoring in CS would probably be best.

However, in reality I still need to consider:

  • job accessibility and income stability

  • the likelihood of being able to do genuinely altruistic work while employed outside EA

Although EA commonly says “talent is the bottleneck, not funding,” in reality it is quite hard to get full-time positions or independent research grants at EA organizations unless one is truly exceptional. Given that, instead of “talent constraint > funding constraint,” it seems more accurate to say “top-level talent constraint > funding constraint > average-level talent constraint.”

Therefore, if I find that I don’t have exceptional talent for AI s-risk research, an alternative strategy would be to become a dentist (or doctor) and earn to give. As a dentist, I could probably donate US$100,000 a year to AI s-risk research. Intuitively that looks impactful, but most people in EA seem to think earning to give is usually a suboptimal option for AI risks.

Some may think that a single major in CS can also bring opportunities to earn money effectively. I think this is worth debating. Some EA experts I’ve talked to recently believe that AGI will replace many CS jobs, so perhaps 30% of CS graduates could face unemployment within 10 years. If that’s true, it will be difficult to earn a high income with a CS degree alone, since the field will be extremely competitive. Dentists, on the other hand, seem less likely to be replaced by AI or lose their jobs in the near future.

However, if it’s feasible for most people to find jobs in non-EA organizations (such as frontier AI labs or governments) that can also reduce s-risks effectively, the need for funding in s-risk work would decrease, and working in non-EA orgs might be a better option than earning to give as a dentist. That said, I’m really unsure whether there are any accessible (i.e., not too hard to get hired for) jobs in the non-EA world that can effectively reduce AI s-risks.

My concern is that in such settings, I may not be able to do meaningfully altruistic work:

  • For example, in the wild-animal suffering (WAS) field, almost no non-EA jobs involve directly reducing that suffering.

  • Academic research might allow it, but becoming a professor is extremely competitive.

  • For AI safety, many corporate AI positions are still not closely connected to reducing s-risks (e.g., preventing digital suffering). Leaders of frontier AI labs and governments seem to care only about near-term AI ethics and extinction risks.

My worry is a scenario where I major in CS, graduate, can hardly find EA jobs or grants, also can’t earn to give, and end up working in non-EA positions until retirement without ever contributing meaningfully to the problems I care about.

If I double major in dentistry, it would open another opportunity to earn to give (which may be important, but I’m unsure).

But this approach costs 4+ years of additional dentistry training and delays my first meaningful contribution from 2029 to 2033, a large opportunity cost. It would be much better to work in the non-EA world while also contributing altruistically.

However, I genuinely do not know which non-EA careers can effectively reduce s-risks. For example, there may be some opportunities in AI frontier labs, but it seems difficult to directly shape laws or policies aimed at reducing digital suffering, since companies currently lack incentives to prioritize this issue. As a result, any impact in such roles might be indirect.

My current estimate is that working at frontier AI labs or in AI governance in a non-EA context might achieve only about 20–40% of the impact of being an independent researcher working directly on the most effective s-risk topics and interventions. There are probably many other non-EA career paths that can effectively reduce AI s-risks that I don’t know about; I’d be glad if you could share such opportunities with me.

Why I am seeking discussion partners

Although I’m having a really tough time deciding whether to do a double major, I’ll keep working on it with patience and persistence. It’s a crucial decision for my life, because double majoring in dentistry leads to 4 years of opportunity cost. (Double majoring in most other subjects probably wouldn’t take 4 extra years, but in dentistry or medicine it definitely would.)

There are very few active EA local groups in Taiwan right now, so I lack people who can give feedback on my thinking. I am hoping to find someone (not necessarily an expert in AI s-risks specifically; opinions on general career decisions would also be valuable) who is willing to discuss with me occasionally.

Although I lack deep expertise, I can contribute thoughtful reasoning and outside-view criticism. I have talked in depth with a few experts before, and most of them found it a beneficial experience for both of us, but many of those people are currently too busy to continue the discussions.

Final request

If anyone is willing to discuss—by text, voice, or any platform you prefer—I would be extremely grateful. Any level of commitment is welcome. Even a single 10-minute message every few weeks would already help a lot. There is absolutely no obligation; stopping discussion at any time is completely fine. I am perfectly comfortable with slow, asynchronous conversation whenever you happen to have time.

If you’re open to talking, please either comment below, message me on EA Forum, or email me at: carlosgpt500@gmail.com

Thank you very much for reading.
