I mean “ever”; thanks for the question.
Sorry, I tried to break my writing into separate paragraphs, but the editor keeps automatically removing the blank lines I put between sentences.
[Question] Why are most people in EA confident that AI will surpass humans?
Or do people think current GPT systems are already very close to AGI? If so, what are the supporting arguments? (I’ve read the “Sparks of AGI” paper from Microsoft Research.)
Yes, I have read those, and I accept that lots of people believe human-level AGI will come within 20 years and that it’s just a matter of time. But I don’t know why people are so confident about this. Do people think current AI algorithms are already good enough, in theory, to do most tasks on Earth, and that all we need is faster computing?
[Question] Asking for online resources on why current AI is near AGI
Thanks for your reply, but my parents wouldn’t allow a 16-year-old kid to travel abroad by himself, and they cannot come with me either.
[Question] Asking for online calls to discuss AI s-risks
I wonder about the opposite question: why should we work on AI hell? I’m a 16-year-old boy and an AI outsider, so we may have a big knowledge gap on AGI. I think it would be great if you could give me some persuasive arguments for working on reducing s-risks. I have trouble reading the essays on s-risks online (I’ve read CLR, CRS, Brian Tomasik, Magnus Vinding, and Tobias Baumann’s), because they’re too hard and theoretical for me. Also, there are some (basic) questions I can’t find answers to:
1. How likely do you think it is that AI develops sentience, and that the sentience is animal-like (I mean, that it includes suffering the way animals’ does)? What are the arguments? You keep talking about AI suffering, but it’s really hard to imagine AI suffering in a common-sense way.
2. Can you list scenarios in which AI never becomes sentient but still causes astronomical suffering for humans and animals? (Some I have heard of are threat scenarios, where different AI systems threaten each other by causing human suffering, and near-miss scenarios.)
Thanks for your reply.
Thanks a lot for sharing the article (it’s not the same one I saw on 80,000 Hours). So Dr. Lewis did think about “replacing a worse doctor”. There are some factors that would change the QALY numbers:
1. As mentioned in Lewis’s article, different types of doctors contribute differently. For example, doctors who treat colds may not make a big impact (because it’s easy, with standard procedures), while doctors who treat cancers well do.
2. How bad are bad doctors?
3. The value of “curing a disease”: sometimes curing a cancer doesn’t mean you’ve prevented a person from dying of cancer, because they might get another cancer in a few years. The positive impact relies on reducing suffering, but curing a disease may only delay the suffering, especially for seniors (unless they die with less suffering; some ways of dying involve more suffering, like advanced cancers). If you consider the impact on patients’ relatives, prolonging patients’ lifespans might be good if their relatives would grieve over the patient’s death.
See my earlier post: https://forum.effectivealtruism.org/posts/vHsFTanptoRa4gEh8/asking-for-important-college-major-advices-should-i-study
In short, being a doctor seems to have a positive impact on society, though the impact may be small. But it’s better than working as a software engineer at a big tech company. If I decide to work in global health, maybe I should study medicine so I have a backup plan of working as a doctor (not everybody can be a researcher). But medicine requires long training and has worse exit options into other jobs.
I’ve read an interview with Gregory Lewis on 80,000 Hours. He argued that, counterfactually, doctors don’t make a big difference: medicine is already a highly competitive field, so you don’t have a big impact, especially if you work in a rich country. There’s a problem with this: you can still make a difference by being a better, more patient doctor. I don’t know about doctors in America, but in East Asia, not every doctor is good. Some just want to make money and treat patients poorly, making them suffer more (through misdiagnosis, for example). So if you can be a good doctor, the counterfactual becomes “you replace a worse doctor than you”. I don’t know how valuable that would be, but it suggests that being a doctor in a rich country may still be more altruistic than a typical career. Being a biology researcher may be more valuable than being a clinical doctor in the long run, but I think we may underestimate a doctor’s impact. What do you think? (This was a front-page post, but someone suggested that posting it here would be better.)
This question is important to me; it affects my major/career decision. Some of you downvoted this post, and I’d like to know where my reasoning went wrong, so please share your opinions.
Thinking more on a doctor’s value
Will AGI development be restricted by physics and semiconductor fabrication? I don’t know why AI has developed so fast historically, but some say it’s because of Moore’s Law in semiconductors. If semiconductor progress comes to an end because of physical limitations, can AI still grow exponentially?
Sorry, it looks like some people downvoted this question. I’d like to know whether I asked a dumb question or just didn’t state it clearly. But I think this is an important question, because if you keep having trouble getting funding in academia (in Taiwan it’s common not to have enough funding), you can’t finish your research.
[Question] Is it harder to get animal welfare research funding in academia?
Hello Ben: Thanks a lot for your answers. I really need advice about this. If you’d like, maybe you can share more opinions on my other considerations in the article. I still don’t have strong enough arguments to make a decision now.
1. Thanks for sharing your experience of changing fields from bioinformatics to web app development.
I don’t know much about CS, but I think it has a lot of subfields, such as software, hardware, informatics, and firmware. Even software engineers specialize in different programming languages, and every career requires a specialty, so I really don’t know how working in a CS field differs across companies. About how many years does it take to learn and build the experience needed to move from bioinformatics to a big tech company? What about changing to machine learning research? Is it feasible to change careers at an older age, like 50 (if I have enough money)?
2. Are there any more examples showing that the most impactful cause areas change quickly over time?
Finally, I think it’s hard to say you’re interested in something until you take some of the harder courses in college (especially for me, because I’m interested in a lot of fields). So I still need to make my college decision first.
Hello Ben (what if you could give 20% of your income? Would it be twice as impactful?)
1. Thanks for answering; there are fewer people in EA working in the biology field.
2. Earning to give (ETG) is really something we can consider. According to Toby Ord’s podcast on 80,000 Hours, talent gaps are much more pressing than funding gaps; most EA organizations would rather gain a great worker than a $100,000 donation (though areas like animal welfare may be different, since their funding gaps are bigger). You should also consider careers like biology professor: if you’re a good one, you’re effectively winning research funds for topics important to EAs, like malaria research, too.
3. Yes, of course medicine gives you social skills, and medical knowledge can be applied to medical research. I don’t know medicine, but I have doubts: (i) if you work in a non-medical field (such as animal welfare or lab-grown meat), do you need that detailed clinical knowledge? (ii) Wouldn’t working as a researcher (e.g. a bioinformatics engineer) build your career capital better than being a doctor? The two paths require different experience.
4. As I said in my article, isn’t medicine a narrower subject? CS is more useful than biology, because every company needs CS employees but maybe not biologists, and medicine is only human biology. You don’t need medicine to work on AI risks or climate change.
Sorry if my comment showed disrespect to anyone who is an expert in medicine.
Thanks for your reply. I’ve read this, but it doesn’t explain why we should expect AI to keep growing at an exponential speed. Even though computing power (FLOPs) is growing like Moore’s Law, does fast computing mean AGI can do most things on Earth better than humans?