Thanks for your reply. I’ve read this, but it doesn’t say why we should expect AI to keep growing at an exponential speed. Computing speed (FLOPs) is accelerating like Moore’s law, but does fast computing mean an AGI that can do most things on earth better than humans?
[Question] Should effective altruists learn a wide range of subjects?
[Question] Why is learning economics, psychology, and sociology important for preventing AI risks?
[Question] Why is the impact of a medical researcher bigger than that of a clinical doctor?
[Question] The suffering scale of human diseases vs factory farming
[Question] Does improving global health cause the meat-eater problem?
[Question] Do you think the probability of future AI sentience (suffering) is >0.1%? Why?
[Question] What’s the exact way you predict the probability of AI extinction?
I mean “ever”. Thanks for the question.
Sorry, I tried to make separate paragraphs in my writing, but it keeps automatically removing the blank lines I put between sentences.
[Question] Why are most people in EA confident that AI will surpass humans?
Or do people think the GPT systems now are already very close to AGI? If so, what are the supporting arguments? (I’ve read the “Sparks of AGI” paper by Microsoft.)
Yes, I have read those, and I accept that lots of people believe human-level AGI will come within 20 years and that it’s just a matter of time. But I don’t know why people are so confident about this. Do people think the AI algorithms we have now are, in theory, already good enough to do most of the tasks on earth, and all we need is faster computing?
[Question] Asking for online resources on why AI is now near AGI
Thanks for your reply, but my parents wouldn’t allow a 16-year-old kid to travel abroad by himself, and they cannot come with me either.
[Question] Asking for online calls to discuss AI s-risks
I wonder about the opposite question: why should we work on preventing an AI hell? I’m a 16-year-old boy and an AI outsider, so we may have a big knowledge gap on AGI. I think it would be great if you could give me some persuasive arguments for working on reducing s-risks. I have trouble reading the essays on s-risks online (I have read CLR, CRS, Brian Tomasik, Magnus Vinding, and Tobias Baumann’s), because they are too hard and theoretical for me. Also, there are some (basic) questions I can’t find the answer to:

1. How likely do you think it is that AI can develop sentience, and that it’s animal-like (I mean, that the sentience contains suffering like animals’)? What are the arguments? You keep talking about AI suffering, but in a common-sense view it’s really hard to imagine an AI suffering.
2. Can you list scenarios where, even if AI never becomes sentient, it causes astronomical suffering for humans and animals? (Some I have heard of are threatening scenarios, where different AI systems threaten each other with causing human suffering, and near-miss scenarios.)
Thanks for your reply.
Thanks a lot for sharing the article (it’s not the same one I saw on 80,000 Hours). So Dr. Lewis thought about “replacing a worse doctor”. There are some factors that would change the QALY numbers:

1. As mentioned in Lewis’s article, different types of doctors contribute differently. Maybe doctors who treat colds don’t make a big impact (because it’s easy and follows an SOP), but doctors who treat cancers well do.
2. How bad are the bad doctors?
3. The value of “curing a disease”: sometimes curing a cancer doesn’t mean you prevent a person from dying of cancer, because he might get another cancer in a few years. The positive impact relies on “reducing suffering”, but curing a disease may only delay the suffering, especially for seniors (unless you die with less suffering; some ways of dying involve more suffering, like advanced cancers). If you consider the impact on patients’ relatives, prolonging a patient’s lifespan might be good if the relatives would feel sad about the patient’s death.
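For example, this is how I understand the “replaceability” idea in rough numbers. It’s just a sketch; all the variable names and figures below are made up by me to show the logic, not taken from Lewis’s article:

```python
# Rough counterfactual-impact sketch (all numbers are made-up guesses).
# The replaceability idea: my impact as a doctor is not the QALYs I
# produce, but the QALYs I produce minus what the slightly worse doctor
# who would have taken my place would have produced.

qalys_per_year_me = 100          # hypothetical: QALYs I add per year
qalys_per_year_replacement = 90  # hypothetical: QALYs my replacement would add
career_years = 40

counterfactual_impact = (qalys_per_year_me - qalys_per_year_replacement) * career_years
print(f"Counterfactual impact: {counterfactual_impact} QALYs over a career")
# -> 400 QALYs, much smaller than the naive 100 * 40 = 4000
```

So the three factors above matter because they change the gap between me and my replacement, not just my own output.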
See this earlier post: https://forum.effectivealtruism.org/posts/vHsFTanptoRa4gEh8/asking-for-important-college-major-advices-should-i-study
In short, a doctor seems to have a positive impact on society, though the impact may be small, and it’s better than working as a software engineer at a big tech company. If I decide to work in the global health area, maybe I should study medicine so I have a backup plan of working as a doctor (not everybody can be a researcher). But medicine requires long training and has worse exit options to other jobs.
So the predictions the experts made are all purely “subjective” predictions? I think there should be some logical thinking/arguments, or maybe something like a Fermi estimation, to explain how they estimate the number, unless it’s mostly intuition.
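For example, I imagine a Fermi-style breakdown could look something like this. All the probabilities below are numbers I made up just to show the structure, not anyone’s real estimate:

```python
# Hypothetical Fermi-style decomposition of P(AI extinction).
# Every number is an illustrative guess, not an expert's actual figure.

p_agi_this_century = 0.5       # P(human-level AGI is built this century)
p_misaligned_given_agi = 0.3   # P(its goals are badly misaligned | AGI built)
p_uncontainable = 0.4          # P(we fail to contain it | misaligned AGI)
p_extinction_given_loss = 0.5  # P(losing control leads to extinction)

p_extinction = (p_agi_this_century
                * p_misaligned_given_agi
                * p_uncontainable
                * p_extinction_given_loss)
print(f"P(extinction) ~= {p_extinction:.3f}")  # ~= 0.030 with these guesses
```

Even a rough breakdown like this would help me see where the disagreement is, instead of just a single number.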