So the predictions experts made are all purely "subjective" predictions? I think there must be some logical arguments, or maybe something like Fermi estimation, to explain how he estimates the number, unless it's mostly intuition.
Sorry, it looks like some people downvoted this question. I'd like to know if I asked a dumb question or just didn't clarify it well. But I think this is an important question, because if you keep having trouble getting funding in academia (it's common in Taiwan not to have enough funding), you can't finish your research.
I think we should run a survey to investigate whether EA outsiders think reducing extinction risk is right or wrong, and how many people think the net value of future humanity is positive or negative. Though reducing x-risks may be the mainstream view in EA, we should respect outsiders' ideas.
Sorry, it seems some of you don't agree with my opinion. Would you share your objections? I lean more toward negative utilitarianism, so I'm really not sure how likely it is that reducing x-risks is actually not so good. I think we in EA shouldn't be too confident in ourselves: lots of non-EA philosophers and activists (e.g. VHEMT) have also done a lot of thinking about human extinction, and we should at least know and respect their opinions.
I mean "ever". Thanks for the question.
Sorry, I tried to break my writing into separate paragraphs, but the editor keeps automatically removing the spacing I put between sentences.
I wonder about the opposite question: why should we work on AI hell? I'm a 16-year-old boy and an AI outsider, so we may have a big knowledge gap on AGI. I think it would be great if you could give me some persuasive arguments for working on reducing s-risks. I have trouble reading the essays on s-risks online (I've read CLR, CRS, Brian Tomasik, Magnus Vinding, and Tobias Baumann's), because they're too hard and theoretical for me. Also, there are some (basic) questions I can't find answers to: 1. How likely do you think it is that AI can develop sentience, and that this sentience is animal-like (I mean, that it contains suffering the way animals' does)? What are the arguments? You keep talking about AI suffering, but it's really hard to imagine AI suffering in a common-sense way.
2. Can you list scenarios where, even if AI never becomes sentient, it still causes astronomical suffering for humans and animals? (Some I've heard of are threat scenarios, where different AI systems threaten each other by causing human suffering, and near-miss scenarios.)
Thanks for your reply.
Thanks a lot for sharing the article (it's not the same one I saw on 80,000 Hours). So Dr. Lewis did think about "replacing a worse doctor". There are some factors that would change the QALY number:

1. As mentioned in Lewis's article, different types of doctors contribute differently. Maybe doctors who treat colds don't make a big impact (because it's easy, with standard procedures), but doctors who treat cancers well do.
2. How bad are the bad doctors?
3. The value of "curing a disease": sometimes curing a cancer doesn't mean you prevented a person from dying of cancer, because he might get another cancer in a few years. The positive impact relies on "reducing suffering", but curing a disease may only delay the suffering, especially for seniors (unless the person dies with less suffering; some ways of dying involve more suffering, like advanced cancers). If you consider the impact on patients' relatives, prolonging patients' lifespans might be good if their relatives would grieve the patient's death.
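To make the replaceability point concrete, here is a minimal back-of-the-envelope sketch; every number below is a made-up placeholder for illustration, not an estimate from Lewis's article:

```python
# Back-of-the-envelope counterfactual impact of becoming a doctor.
# All inputs are hypothetical placeholders, not real estimates.

qalys_per_year_average_doctor = 20   # QALYs a typical doctor adds per year (assumed)
replacement_quality = 0.9            # the doctor you displace is 90% as good (assumed)
your_quality = 1.0
career_years = 30

# Your counterfactual impact is only the *gap* between you and the
# doctor who would have taken your place, not your gross output.
gross_qalys = your_quality * qalys_per_year_average_doctor * career_years
counterfactual_qalys = (
    (your_quality - replacement_quality)
    * qalys_per_year_average_doctor
    * career_years
)

print(f"Gross impact:          {gross_qalys:.0f} QALYs")   # 600
print(f"Counterfactual impact: {counterfactual_qalys:.0f} QALYs")  # 60
```

Under these toy numbers the counterfactual impact is only a tenth of the gross impact, which is the core of the replaceability argument; but if bad doctors are much worse than assumed here (factor 2 above), the gap, and therefore the impact, grows.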
See this post from before: https://forum.effectivealtruism.org/posts/vHsFTanptoRa4gEh8/asking-for-important-college-major-advices-should-i-study
In short, being a doctor seems to have a positive impact on society, though the impact may be small. But it's better than working as a software engineer at a big tech company. If I decide to work in the global health area, maybe I should study medicine to have a backup plan of working as a doctor (not everybody can be a researcher). But medicine requires long training and has worse exit options to other jobs.
I've read an interview with Gregory Lewis on 80,000 Hours. He argued that, because of counterfactual replaceability, doctors don't make a big difference: medicine is already highly competitive, so you don't have a big impact, especially if you work in a rich country. There's a problem with this: you can still make a difference by being a better and more patient doctor. I don't know about doctors in America, but in East Asia, not every doctor is good. Some just want to make money and treat patients poorly, making them suffer more (for example through misdiagnosis). So if you can be a good doctor, the counterfactual case would be "you replace a worse doctor than you". I don't know how valuable that would be, but it suggests that being a doctor in a rich country may still be more altruistic than a typical career. Being a biology researcher may be more valuable than being a clinical doctor in the long run, but I think we may underestimate a doctor's impact. What do you think? (This was a front-page post, but someone suggested that posting it here would be better.)
This question is important to me; it affects my major and career decision. Some of you downvoted this post, and I'd like to know the mistakes in my reasoning, so please share your opinions.
Will AGI development be restricted by physics and semiconductor wafers? I don't know why AI has developed so fast historically, but some say it's because of Moore's Law in semiconductors. If the development of semiconductor wafers comes to an end because of physical limitations, can AI still grow exponentially?
Do you understand what I wrote on my whiteboard?
Thanks for answering. I respect your values about x-risks (and I'll consider whether I was wrong).
Thanks. This really helps a lot.
Is this question too hard to answer? Also, if we predict AGI will be banned or at least restricted within a few years, then this problem won't be that urgent, so should the priority of AI safety be lowered?
Thanks very much for your response and patience.
Actually, this is not just about persuading others; it's about persuading myself. I'm a CS outsider, and I really don't understand why many people are confident that AGI will be created someday. 2. Of course predictions may not be accurate, and it's a personal view. But I think there must be some reasons why you predict AGI at 50% by 2040, not 10%.
I'm sorry if I posted a dumb question here, but I don't think it is one. Are there any problems with the question?
What’s the expected value of working in AI safety?
I'm not certain about longtermism and the value of reducing x-risks; I'm not optimistic that we can really affect the long-term future, and I guess the future of humanity may be bad. Many EA people are like me; that's why only 15% of respondents think AI safety is the top cause area (survey by Rethink Priorities).
However, in a "near-termist" view, AI safety research is still valuable, because it may avert catastrophes (not only extinction) that would cause the suffering of 8 billion people and maybe animals. But things like research on global health or preventing pandemics seem to have a more certain expected value (maybe 100 QALYs per extra person, or so), because we have historical experience and a feedback loop.

AI safety is the most difficult problem on earth; I feel like its expected value is "???". It may be very high, or it may be 0. We don't know how serious the suffering it causes would be (would it cause extinction in a minute while we're sleeping, or torture us for years?). We don't know whether we're on the way to finding the solution, or whether all our predictions of AGI's thoughts are wrong. Will governments control the power of AGI? All of the work on AI safety is a kind of "guessing", so I'm confused about why 80,000 Hours estimates the tractability to be 1%.

I know AI safety is highly neglected, and AI may cause unpredictably huge suffering for humans and animals. But if I work in AI safety, I'd feel a little lost because I don't know if I've really done something meaningful; and if I don't work in AI safety, I'd feel guilty. Could someone give me (and other people who hesitate to work in AI safety) some recommendations?
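To show the comparison I'm struggling with, here's a minimal sketch of the two expected-value profiles; every probability and QALY figure below is an invented placeholder, not a real estimate:

```python
# Toy expected-value comparison between a "certain" intervention and a
# high-variance one. All numbers are hypothetical placeholders.

# Global health: narrow uncertainty, a historical feedback loop.
global_health_ev = 100  # QALYs per career-equivalent unit (assumed)

# AI safety: wide uncertainty, no feedback loop. Modeled as a few
# hand-picked scenarios with guessed probabilities.
ai_safety_scenarios = [
    (0.90, 0),          # most likely: your work changes nothing (assumed)
    (0.09, 1_000),      # small chance of a modest contribution (assumed)
    (0.01, 1_000_000),  # tiny chance of mattering for a catastrophe (assumed)
]

# Expected value is the probability-weighted sum over scenarios.
ai_safety_ev = sum(p * qalys for p, qalys in ai_safety_scenarios)

print(f"Global health EV: {global_health_ev} QALYs")
print(f"AI safety EV:     {ai_safety_ev:.0f} QALYs")  # 10090
```

Under these toy numbers the AI-safety expected value is higher, but it is dominated by the rare scenario, which is exactly why it feels like "???": the answer hinges on probabilities nobody can check against experience, while the global-health number comes with a feedback loop.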