Sorry, it seems some of you don’t agree with my opinion. Would you share your objections? I lean more toward negative utilitarianism, so I’m genuinely unsure how likely it is that reducing x-risks is actually not good. I think people in EA shouldn’t be too confident of ourselves: many non-EA philosophers and activists (e.g., VHEMT) have also thought a lot about human extinction, and we should at least know and respect their views.
jackchang110
I think we should run a survey to investigate whether people outside EA think reducing extinction risk is right or wrong, and how many think the net value of future humanity is positive or negative. Reducing x-risks may be the mainstream view in EA, but we should still respect outsiders’ ideas.
[Question] Is a biology major a suitable choice for EAs?
[Question] Cause areas directly related to computer science besides AI risks?
[Question] Debates on reducing long-term s-risks?
Thanks. This really helps a lot.
Is this question too hard to answer? Also, if we predict AGI will be banned or at least heavily restricted within a few years, then this problem won’t be as urgent, so should the priority of AI safety be lowered?
[Question] Predictions for future AI governance?
Thank you very much for your response and patience.
1. Actually, this is not just about persuading others; it’s also about persuading myself. I’m a CS outsider, and I really don’t understand why many people are confident that AGI will be created someday. 2. Of course predictions may not be accurate, and it’s a personal view. But there must be some reasons why you predict a 50% chance of AGI by 2040 rather than 10%.
I’m sorry if I posted a dumb question here, but I don’t think it is one. Are there any problems with the question?
[Question] How to persuade a person without a CS background that AGI is 50% likely by 2040?
[Question] What are the biggest obstacles on AI safety research career?
Should we prioritize cognitive science in EA?
Thanks for your response. I agree that s-risks from AI are very important, but: 1. Is there a comparison showing they are more important than other s-risk areas (e.g., macrostrategy, politics)? Within AI, which matters more: AI sentience or AI safety for humans? 2. What subjects should we work on or study for s-risks? I’ve thought about working on cognitive science, because a lot of moral uncertainty remains unresolved (e.g., the ratio of suffering to happiness, the power of the hedonic treadmill), and neuroscience is developing quickly, too. If we can use neuroscience to understand the essence of “consciousness”, it could be applied to AI/animal sentience and to AI moral alignment. But there seems to be little discussion about this?
I agree with the view that “urgency” is hard to fit into the formula, because urgency isn’t directly captured by “good done per extra person or dollar” for yourself.
An urgent problem is one that can only be solved right now, so if you don’t focus on the more urgent problem, people in the future won’t be able to work on it, which may reduce the good that future people can do. But I don’t know how to quantify the importance of “urgency” (a rough sketch of one way to model it is below).
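A minimal, hypothetical sketch of what I mean, not taken from any EA framework: all the numbers and the idea of treating urgency as “the probability the problem is still solvable if we defer work” are made up purely for illustration.

```python
# Hypothetical sketch: model "urgency" as the probability that the problem
# can still be solved if we delay work by one funding cycle.
# All numbers are illustrative, not real cost-effectiveness estimates.

def expected_value_now(good_per_dollar: float, budget: float) -> float:
    """Value of spending the budget on this cause immediately."""
    return good_per_dollar * budget

def expected_value_later(good_per_dollar: float, budget: float,
                         p_still_solvable: float) -> float:
    """Value of deferring: the same spend, discounted by the chance
    the window of solvability has closed (this is the 'urgency' term)."""
    return good_per_dollar * budget * p_still_solvable

if __name__ == "__main__":
    budget = 1_000_000  # dollars, illustrative
    causes = {
        # name: (good done per extra dollar, P(still solvable if we wait))
        "urgent_cause":     (1.0, 0.4),   # less cost-effective, but the window may close
        "non_urgent_cause": (1.5, 0.95),  # more cost-effective, can safely be deferred
    }
    for name, (gpd, p_later) in causes.items():
        now = expected_value_now(gpd, budget)
        later = expected_value_later(gpd, budget, p_later)
        # "Cost of delay" is one crude way to price urgency.
        print(f"{name}: value now = {now:,.0f}, value if deferred = {later:,.0f}, "
              f"cost of delay = {now - later:,.0f}")
```

On these made-up numbers, the urgent cause loses more value from delay even though it does less good per dollar, which is one crude way “urgency” could be compared against the standard cost-effectiveness term.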
[Question] Are there cause prioritization estimates for s-risk supporters?
[Question] Should we consider “urgency” as a factor in cause prioritization?
[Question] Should people get a neuroscience PhD to work in the AI safety field?
Thank you very much for your response; it helps me a lot. I’m not very familiar with academia. You said we can address pandemic risks by working in public health. Then which non-EA academic departments give AI safety or space governance researchers a chance to work in them? Also, will there be problems when researching unpopular or long-term topics, such as not getting enough funding, papers not being cited enough by non-EAs, or difficulty being promoted to professor?
Thank you for answering; some of these points are new to me and I hadn’t considered them. What do you work on in ML? AI safety? I want to know what EA-related things I can work on in CS besides reducing AI x-risks. If I insist on working on those bio-related topics, is it still worth majoring in CS and taking less biology (for bioinformatics and other CS-related skills)?