(1) What is the probability of mankind, or a “good” successor species we turn into, surviving for the next 1000 years? (2) What is the probability of MIRI being the first organization to create an AGI smart enough to, say, be better at computer programming than any human?
(1) Not great. (2) Not great.
(To be clear, right now, MIRI is not attempting to build an AGI. Rather, we’re working towards a better theoretical understanding of the problem.)