Update: After some reflection, I think reading the basics of economics would be helpful, given that I currently know basically no economics.
Thanks a lot for your reply. I'm really grateful for it.
Yeah, I agree most senior EAs don't have a completely accurate mathematical model for calculating how worthwhile earning to give is. However, I believe they have a thinking framework that can produce an approximate answer far more accurate than my own thoughts. That means there should be an estimation framework (for the value of earning to give, which is probably complicated) that supports a best guess rather than a random guess.
Hello Mr. Ling:
I've received your email and have replied to you.
Thanks a lot for your kindness.
Thank you very much for your kindness; I will email you later.
Thanks a lot for your answer first. Well, I know that most EA organizations and grantmakers say talent is the primary constraint. However, in practice it seems very difficult to get a job at an EA organization. I'm unsure, but it also seems difficult (less than a 50% success rate, say) to get independent research funding from grantmakers. Of course, if you have great research talent it's much easier to get funding, but I'll probably just become a mediocre researcher, so I probably can't rely on EA grantmakers to support me.
What do you think about my main question: Is it difficult to find or create altruistic work within non-EA organizations (especially in reducing AI s-risks)?
Hello Thao: Thanks a lot for your patient reply.
I don’t think double majoring itself is difficult, but it is very time-consuming. It would require 4–5 additional years of studying medicine and doing hospital internships. Since I believe AI s-risks are probably far more important than bio x-risks and global health, I think it makes more sense to major only in CS and contribute directly, rather than spending those extra years learning medicine.
However, I'm worried that without enough financial security, I might end up working in non-EA organizations until retirement and be unable to focus on the most altruistic work. That's my main concern: How likely is it that I can find a career outside EA organizations that still allows me to work on altruistic goals, such as reducing AI s-risks?
In Taiwan, medical and dental school tuition is very cheap, so debt wouldn’t be an issue. In fact, I’m considering switching from medicine to dentistry, because dental residency is 2–4 years shorter than medical residency. Based on my estimation, after graduating it might take around 5 years to earn about $500k if I choose the dentist path.
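(A rough check on that number, assuming I could save on the order of $100k per year after graduating: $100k/year × 5 years ≈ $500k. Both figures are my own guesses, not researched data.)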
I'm actually currently a first-year university student, double-majoring in medicine and computer science. (Unlike in the US, in Taiwan medical education begins at the undergraduate level, and one obtains a doctor's license after completing the medical program.)
I've still been struggling with a major decision: whether I should continue my double major in medicine or focus solely on computer science. In the EA community's reasoning, medicine seems less relevant to priorities like AI safety or s-risks. However, one major advantage of studying medicine is financial stability. Before transformative AI arrives, I suspect that computer science jobs might become increasingly competitive, whereas doctors may still earn a stable income. Therefore, in an uncertain future, I've considered working as a doctor temporarily (perhaps for around 10–15 years) and saving most of the earnings to reduce future financial pressure.
(Although I'm aware that future AI progress could eventually automate much of medical/dental work.)
Therefore, if it's really difficult to find EA-aligned jobs at non-EA companies, that would strengthen the argument for double majoring in medicine/dentistry.
Some s-risks people may be afraid of the information hazard of answering this question publicly; if that's the case, you can email carlosgpt500@gmail.com to answer it privately.
The reason I came up with this question (this is not directly related to the question, so I'm putting it in the comments section here):
I'm currently an 18-year-old guy having a hard time deciding between double majoring in medicine and CS (computer science) and majoring in CS alone.
One advantage of medicine is its high salary. In fact, my parents think it's necessary to save a retirement fund of around one million dollars, so they strongly advise me to double major in medicine. Before, I thought: Is there any need for a retirement fund? I could work for EA until my body physically can't anymore, and after that, I thought I could commit suicide, because if you can't work anymore, what's the meaning of living? However, there's a flaw in this thinking, which is what my question is about.
So the predictions experts made are all purely "subjective" predictions? I think there must be some logical arguments, or maybe something like a Fermi estimate, to explain how an expert arrives at the number, unless it's mostly intuition.
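For example, the kind of breakdown I have in mind might look like this (a purely illustrative sketch; every number here is a made-up placeholder, not anyone's actual forecast):

```python
# A Fermi-style decomposition of an AGI-timeline forecast.
# Every probability below is a made-up placeholder, only to illustrate
# how a forecast could be built from smaller, arguable pieces.
p_enough_compute   = 0.8  # hardware scaling continues far enough
p_right_algorithms = 0.5  # the needed training methods get discovered
p_actually_built   = 0.9  # someone builds and deploys such a system

p_agi_within_20_years = p_enough_compute * p_right_algorithms * p_actually_built
print(f"P(AGI within 20 years) ~ {p_agi_within_20_years:.2f}")  # ~0.36
```

Even if the individual numbers are guesses, this kind of decomposition at least makes the disagreement inspectable, which is what I mean by something more than pure intuition.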
Thanks for your reply. I've read this, but it doesn't explain why we should expect AI to keep improving at an exponential speed. Though computing speed (FLOPs) is accelerating like Moore's law, does fast computing mean an AGI could do most things on Earth better than humans?
I mean "ever". Thanks for the question.
Sorry, I tried to break my writing into paragraphs, but it keeps automatically removing the blank lines I put between them.
Or do people think the current GPT systems are already very close to AGI? If so, what are the supporting arguments? (I've read Microsoft's "Sparks of AGI" paper on GPT-4.)
Yes, I have read those and accept that lots of people believe human-level AGI will come within 20 years and that it's just a matter of time. But I don't know why people are so confident about this. Do people think today's AI algorithms are already theoretically sufficient for most tasks on Earth, and all we need is faster computing?
Thanks for your reply, but my parents wouldn’t allow a 16-year-old kid to travel abroad by himself, and they cannot come with me either.
I wonder about the opposite question: Why should we work on AI hell at all? I'm a 16-year-old boy and an AI outsider, so we may have a big knowledge gap on AGI. It would be great if you could give me some persuasive arguments for working on reducing s-risks. I have trouble reading the essays about s-risks online (I've read CLR's, CRS's, Brian Tomasik's, Magnus Vinding's, and Tobias Baumann's), because they're too hard and theoretical for me. Also, there are some (basic) questions I can't find answers to: 1. How likely do you think it is that AI can develop sentience, and that it's animal-like (I mean, sentience that includes suffering, as in animals)? What are the arguments? You keep talking about AI suffering, but in a common-sense view it's really hard to imagine an AI suffering.
2. Can you list scenarios in which, even if AI never becomes sentient, it still causes astronomical suffering for humans and animals? (Some I've heard of are threat scenarios, where different AI systems threaten each other with causing human suffering, and near-miss scenarios.)
Thanks for your reply.
Thanks a lot for sharing the article (this is not the same one I saw on 80,000 Hours). So Dr. Lewis thought about "replacing a worse doctor". There are some factors that would change the QALY numbers:
1. As mentioned in Lewis's article, different types of doctors contribute differently. For example, doctors who treat colds may not make a big impact (because it's easy, with standard procedures), but doctors who treat cancers well do.
2. How bad are the bad doctors?
3. The value of "curing a disease": sometimes curing a cancer doesn't mean you prevent a person from dying of cancer, because he might get another cancer in a few years. The positive impact relies on "reducing suffering", but curing a disease may only delay the suffering, especially for seniors (unless the person dies with less suffering; some ways of dying involve more suffering, like advanced cancers). If you consider the impact on patients' relatives, prolonging patients' lifespans might be good if their relatives would grieve the patient's death.
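To make the "replacing a worse doctor" point concrete, here's a minimal sketch with hypothetical numbers (these are placeholders of mine, not figures from Lewis's article):

```python
# Counterfactual impact of being a doctor, when the alternative is that a
# slightly worse applicant takes the seat. All numbers are hypothetical.
qalys_per_year_me          = 120  # QALYs I'd produce per year as a doctor
qalys_per_year_replacement = 100  # QALYs the next-best applicant would produce
career_years               = 30

# What matters is the difference, not my gross output:
counterfactual_qalys = (qalys_per_year_me - qalys_per_year_replacement) * career_years
print(counterfactual_qalys)  # 600, far less than the naive 120 * 30 = 3600
```

The three factors above would all change these inputs: factor 1 changes my per-year output, factor 2 changes the replacement's, and factor 3 discounts what each QALY is really worth.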
I saw this post before: https://forum.effectivealtruism.org/posts/vHsFTanptoRa4gEh8/asking-for-important-college-major-advices-should-i-study
In short, being a doctor seems to have a positive impact on society, though the impact may be small. But it's better than working as a software engineer at a big tech company. If I decide to work in the global health area, maybe I should study medicine to have a backup plan of working as a doctor (not everybody can be a researcher). But medicine requires long training and has worse exit options to other jobs.
Thanks a lot for your answer. I'm really grateful for it.
Your response makes sense to me. However, if today you had to decide between earning to give (suppose you could donate $100,000 USD a year) and working directly at an EA organisation, how would you make the decision given your donation ability and talent?
Of course, if you have high talent you should work directly, but how do you decide if you have only average or low talent in your cause area? (See the toy comparison below for the kind of answer I'm hoping for.)
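For example, a toy version of the comparison might look like this (every number is a hypothetical placeholder; estimating them well is the hard part I'm asking about):

```python
# Toy comparison: earning to give vs. direct work at an EA organisation.
# All numbers are hypothetical placeholders, just to show the structure.
donation_per_year      = 100_000  # USD I could donate by earning to give
cost_per_marginal_hire = 150_000  # what funding one additional hire costs an org
my_output_vs_marginal  = 0.7      # my productivity relative to that marginal hire

hires_funded_by_donating   = donation_per_year / cost_per_marginal_hire  # ~0.67
hires_worth_of_direct_work = my_output_vs_marginal                       # 0.7

print("work directly" if hires_worth_of_direct_work > hires_funded_by_donating
      else "earn to give")
```

With these made-up inputs the two options come out nearly even, which is exactly why I'd like to know how people with average talent actually estimate them.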