Total mixed bag of questions, feel free to answer any/all. Apologies if you’ve already written on the subject elsewhere; feel free to just link if so.
No worries.
What is your current marginal project(s)? How much will they cost, and what’s the expected output (if they get funded)?
We’re currently fundraising in particular for integrated assessment, http://gcrinstitute.org/integrated-assessment. Most institutional funders have programs on only one risk at a time. We’re patching together integrated assessment work from other projects, but hope to get more dedicated integrated assessment funding. Something up to around $1M/yr would probably suit us well for now, but this is significantly higher than what we currently have, and every dollar helps.
What is the biggest mistake you’ve made?
This is actually an easy one, since we just finished shifting our focus. The biggest mistake we made was letting ourselves get caught up in an ad hoc, unfocused mix of projects, instead of prioritizing better. The integrated assessment is now our core means of prioritizing. See more at http://gcrinstitute.org/february-newsletter-new-directions-for-gcri.
What is the biggest mistake you think others make?
Well, most people make the mistake of not focusing mainly on GCR reduction. Within the GCR community, I think the biggest mistake is not focusing on how best to reduce the risks. Instead, a lot of people focus on the risks themselves.
What do you think about the costs and benefits of publishing in journals as strategy?
We publish mainly in academic journals. It takes significant extra effort and introduces delays, but it almost always improves the quality of the final product; it also attracts a wider audience, can be used more widely, and has significant reputation benefits. That said, we make heavy use of our academic careers and credentials. It’s not for everyone, and that’s OK.
Do you think the world has become better or worse over time? How? Why?
It’s become better and worse. Population, per capita quality of life, and values seem to be improving. But risks are piling up.
Do you think the world has become more or less at risk over time? How? Why?
More, due mainly to technological and environmental change. Opportunities are also increasing, and they are all around us (the internet, for example), but the risks can be enormous.
What do you think about Value Drift?
Define?
What do you think will be the impact of the Elon Musk money?
It depends on what proposals they get, but I’m cautiously optimistic that this will really help develop a culture of responsibility and safety among AI researchers. More so because it’s not just money—FLI and others are actively nurturing relationships.
How do you think about weighing future value vs current value?
All units of intrinsic value should be weighted equally regardless of location in time or space. (Intrinsic value: see http://sethbaum.com/ac/2012_Value-CBA.html.)
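In cost-benefit terms, a minimal sketch of that position (illustrative symbols, not notation from the linked paper): let $v_t$ be the intrinsic value realized at time $t$ and $\rho$ the pure rate of time preference in the usual discounted sum.

$$W = \sum_{t=0}^{T} \frac{v_t}{(1+\rho)^{t}}, \qquad \text{equal weighting across time} \iff \rho = 0 \;\Rightarrow\; W = \sum_{t=0}^{T} v_t.$$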
What do you think about population growth/stagnation?
I don’t get too worried about it.
Why did you found a new institute rather than joining an existing one?
Because Tony Barrett and I didn’t see any existing institutes capable of working on GCR the way we thought it should be done, in particular working across all the risks with rigorous risk analysis & risk management methodology.
Are there any GCRs you are worried about that would not involve a high death count?
Totalitarianism is one. Another plausible one is toxic chemicals, but this might not be big enough to merit that level of concern. On toxics, see http://sethbaum.com/ac/2014_Rev-Grandjean.pdf.
What’s your probability distribution for GCR timescale?
I’m not sure what you mean by that, but at any rate, I don’t have confident estimates for specific probabilities.
Personal question, feel free to disregard, but this is an AMA: How has concern about GCRs affected your personal life, beyond the obvious? Has it affected your retirement savings? Do you plan to have, or already have, children?
It hasn’t affected things like retirement or children. Maybe it should, but it hasn’t. The bigger factor is not GCR per se but fanaticism towards helping others. I push myself pretty hard, but I would probably be doing the same if I were focusing on, say, global poverty or animal welfare instead of GCR.