This is a really useful overview of crucial questions that have a ton of applications for conscientious longtermists!
The plan for future work seems even more interesting though. Some measures have beneficial effects for a broad range of cause areas, and others less so. It would be very interesting to see how a set of interventions fares in a cost-benefit analysis that takes those interconnections into account.
It would also be super interesting to see combined quantitative assessments from a thoughtful group of longtermists on some of these questions. A series of surveys and some work in spreadsheets could go a long way towards giving us a better picture of where our aims should be.
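To sketch what I mean (purely illustrative; the questions and numbers below are made up), even a very simple aggregation of survey answers could be informative:

```python
# Toy aggregation of hypothetical survey answers (questions and numbers made up).
import statistics

responses = {
    "P(existential catastrophe from AI this century)": [0.02, 0.05, 0.10, 0.01, 0.20],
    "P(engineered pandemic kills >10% of humanity by 2100)": [0.005, 0.01, 0.03, 0.02, 0.08],
}

for question, probs in responses.items():
    print(f"{question}: median={statistics.median(probs):.3f}, "
          f"range={min(probs):.3f}-{max(probs):.3f}")
```

Even a table of medians and ranges like this might say something about where thoughtful longtermists agree and where they diverge.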
Looking forward to seeing more work on this area!
I used to think similarly, but now am more skeptical about quantitative information on longtermists’ beliefs.
[ETA: On a second reading, maybe the tone of this comment is too negative. I still think there is value in some surveys, specifically if they focus on a small number of carefully selected questions for a carefully selected audience. Whereas before my view had been closer to “there are many low-hanging fruits in the space of possible surveys, and doing even quickly executed versions of most surveys will have a lot of value.”]
I’ve run internal surveys on similar questions at both FRI (now Center on Longterm Risk) and the Future of Humanity Institute. I’ve found it very hard to draw any object-level conclusions from the results, and certainly wouldn’t feel comfortable having the results directly influence personal or organizational goals. I feel like my main takeaways were:
It’s very hard to figure out what exactly to ask about. E.g. how to operationalize different types of AI risk?
Even once you’ve settled on some operationalization, people will interpret it differently. It’s very hard to avoid this.
There usually is a very large amount of disagreement between people.
Based on my own experience of filling in such surveys and on anecdotal feedback, I’m not sure how much, if at all, to trust the answers. I think many people simply don’t have stable views on the quantitative values one wants to ask about, and essentially ‘make up’ an answer that may be mostly determined by psychological substitution.
(These are also sufficient reasons for why I’ve never published the results of such surveys, though sometimes there were also other reasons.)
On reflection, maybe this isn’t that surprising: e.g. how to delineate different types of AI risk is an active topic of research, and people write long texts about it; some people have disagreed for years, and don’t fully understand each other’s views even though they’ve tried for dozens of hours. It would be fairly surprising if being asked to fill in a survey made the fundamental uncertainty and confusion suggested by this background go away.
Thanks for sharing your thoughts. I feel uncertain about how valuable it’d be to collect quantitative info about people’s beliefs on questions like these, and your comment has provided a useful input/perspective on that matter.
Some thoughts/questions in response:
Do you think it’s not even net positive to collect such info (e.g., because people end up anchoring on the results or perceiving the respondents as simplistic thinkers)? Or do you just think it’s unclear that it’s net positive enough to justify the time required (from the survey organiser and from the respondents)?
Do you think such info doesn’t even reduce our uncertainty and confusion at all? Or just that it only reduces it by a small amount?
Relatedly, I have an impression that people sometimes deny the value of quantitative estimates/forecasts in general because they seem to view us as simply either “uncertain” or “certain” on a given matter (e.g., “we’ll still have no idea at all”). In contrast, I think we always have some but not complete uncertainty, and that we can often/always move closer to certainty by small increments.
That said, one can share that view of mine and yet think these estimates/forecasts (or any other particular thing) don’t help us move closer to certainty at all.
It seems to me that those takeaways are not things everyone is (viscerally) aware of, and that they’re things it’s valuable for people to be (viscerally) aware of. So it seems to me plausible that these seemingly disappointing takeaways actually indicate some value to these efforts. Does that sound right to you?
E.g., I wouldn’t be surprised if a large portion of people who don’t work at places like FHI wouldn’t realise that it’s hard to know how to even operationalise different types of AI risk, and would expect that people at FHI all agree pretty closely on some of these questions.
And I wouldn’t be super surprised if even some people who do work at places like FHI thought operationalisations would be relatively easy, agreement would be pretty high, etc. Though I don’t really know.
That said, there may be other, cheaper ways to spread those takeaways. E.g., perhaps, simply having a meeting where those points are discussed explicitly but qualitatively, and then releasing a statement on the matter.
Would you apply similar thinking to the question of how valuable existential risk estimates in particular are? I’d imagine so? Does this mean you see the database of existential risk estimates as of low or negative value?
I ask this question genuinely rather than defensively. I’m decently confident the database is net positive, but very uncertain about how positive, and open to the idea that it’s net negative.
Do you think it’s not even net positive to collect such info (e.g., because people end up anchoring on the results or perceiving the respondents as simplistic thinkers)? Or do you just think it’s unclear that it’s net positive enough to justify the time required (from the survey organiser and from the respondents)?
Personally, I think it’s net positive but not worth the time investment in most cases. But based on feedback, some other people think it’s net negative, at least when not executed exceptionally well, mostly due to anchoring, projecting a sense of false confidence, the risk of numbers being quoted out of context, etc.
Do you think such info doesn’t even reduce our uncertainty and confusion at all? Or just that it only reduces it by a small amount?
I think an idealized survey would reduce uncertainty a bit. But in practice I think it’s too hard to tell the signal apart from the noise, and so it basically doesn’t reduce object-level uncertainty at all. I’m more positive about the results providing some high-level takeaways (e.g. “people disagree a lot”) or identifying specific disagreements (e.g. “these two people disagree a lot on that specific question”).
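As a toy illustration of the signal-vs-noise worry (the numbers and the noise model are made up for the sketch, not taken from any real survey): if individual answers are unstable by a factor of a few in odds, then the median of a ten-person survey moves around a lot across hypothetical re-runs of the same survey, and so tells you little about the underlying ‘true’ credence.

```python
# Toy model: respondents blur a common "true" credence with large, unstable noise
# in log-odds before writing a number down. How much does the survey median move
# across re-runs of the same survey? (All numbers made up.)
import math
import random
import statistics

random.seed(0)

def prob_to_logodds(p):
    return math.log(p / (1 - p))

def logodds_to_prob(x):
    return 1 / (1 + math.exp(-x))

true_credence = 0.05
n_respondents = 10
noise_sd = 1.5  # roughly "a factor of a few" of instability in odds

medians = []
for _ in range(1000):
    answers = [
        logodds_to_prob(prob_to_logodds(true_credence) + random.gauss(0, noise_sd))
        for _ in range(n_respondents)
    ]
    medians.append(statistics.median(answers))

medians.sort()
print(f"true credence: {true_credence}")
print(f"survey median, 5th-95th percentile across re-runs: "
      f"{medians[50]:.3f}-{medians[949]:.3f}")
```

Of course, the right noise model is itself unclear, which is sort of the point.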
It seems to me that those takeaways are not things everyone is (viscerally) aware of, and that they’re things it’s valuable for people to be (viscerally) aware of. So it seems to me plausible that these seemingly disappointing takeaways actually indicate some value to these efforts. Does that sound right to you?
Yes, that sounds right to me. I think it’s a bit tricky to get the message right though. I think I’d want to roughly convey a (more nuanced version of) “we still need people who can think through questions themselves and form their own views, not just people who seek guidance from some consensus which on many questions may not exist”. (Buck’s post on deference and inside-view models is somewhat related.) But it’s tricky to avoid pessimistic/non-constructive impressions like “people have no idea what they’re talking about, so we should stop giving any weight to them” or “we don’t know anything and so can’t do anything about improving the longterm future”.
I also do feel a bit torn about the implications myself. After all, the survey issues mostly indicate a failure of a specific way of making beliefs explicit, not necessarily a practical defect in those beliefs themselves. (Weird analogy: if you survey carpenters on weird questions about tables, maybe they also won’t give very useful replies, but they might still be great at building tables.) And especially if we’re pessimistic about the tractability of reducing confusion, then maybe advice along the lines of (e.g.) “try to do useful AI safety work even if you can’t give super clear justifications for what you’re doing and don’t fully understand the views of many of your peers” is among the best generic advice we can give, despite some remaining unease from people who are temperamentally maths/analytic philosopher types such as myself.
Would you apply similar thinking to the question of how valuable existential risk estimates in particular are? I’d imagine so? Does this mean you see the database of existential risk estimates as of low or negative value?
I think a database is valuable precisely because it shows a range of estimates, including the fact that different estimates sometimes diverge a lot.
Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly. But this is probably not among the top criteria I’d use to pick research questions, and usually I’d expect most of the value to come from other sources (e.g. identifying potential interventions/solutions, field building or other indirect effects, …). The reason mostly is that I’m skeptical that marginal research will change “consensus estimates” by enough that the change in the quantitative probability by itself will have practical consequences. E.g. I think it mostly doesn’t matter for practical purposes if you think the risk of extinction from AI this century is, say, 8% or 10% (making up numbers, not my beliefs). If I thought there was a research project that would cause most people to revise that estimate to, say, 0.1%, I do think this would be super valuable. But I don’t think there is such a research project. (There are already people whose credences are 0.1% and 10%, respectively, but the issue is that they don’t fully understand each other, disagree about how to interpret the evidence, etc., and additional research wouldn’t significantly change this.)
Again, I do think there are various valuable research projects that would inform our views on how likely extinction from AI is, among other things. But I’d expect most of the value to come from things other than moving that specific credence.
In any case, all of these things are very different from asking someone who hasn’t done such research to fill in a survey. I think surveying more people on what their x-risk credences are will have ~zero or even negative epistemic value for the purpose of improving our x-risk estimates. Instead, we’d need to identify specific research questions, have people spend a long time doing the required research, and then ask those specific people. (So e.g. I think Ord’s estimates have positive epistemic value, and they also would if he stated them in a survey; but that’s because he has spent a lot of time deriving these specific estimates. If you survey people, even longtermist researchers, most of them won’t have done such research, and even if they have lots of thoughts on relevant questions, if you ask them to give a number they haven’t previously derived with great care, they’ll essentially ‘make it up’.)
Thanks, that’s all really interesting.
I think I largely agree, except that I’m on the fence about the last paragraph.
Regarding existential risk estimates, I do see value in doing research on specific questions that would make us adjust those estimates, and then adjusting them accordingly.
I agree with what you say in this paragraph. But it seems somewhat separate to the question of how valuable it is to elicit and collate current views?
I think my views are roughly as follows:
“Most relevant experts are fairly confident that certain existential risks (e.g., from AI) are substantially more likely than others (e.g., from asteroids or gamma ray bursts). The vast majority of people (and a substantial portion of EAs, longtermists, policymakers, etc.) probably aren’t aware that experts think this, and might guess that the difference in risk levels is less substantial, or be unable to guess which risks are most likely. (This seems analogous to the situation with large differences in charity cost-effectiveness.) Therefore, eliciting and collecting experts’ views can provide a useful input into other people’s prioritisation decisions.
That said, on the margin, it’ll be very hard to shift the relevant experts’ credences on x-risk levels by more than, for example, a factor of two. And there are often already larger differences in other factors in our decisions—e.g., tractability of or personal fit for interventions. In addition, we don’t know how much weight to put on experts’ specific credences anyway. So there’s not that much value in trying to further inform the relevant experts’ credences on x-risk levels. (Though the same work that would do that might be very valuable for other reasons, like helping those experts build more detailed models of how risks would occur and what the levers for intervention are.)”
Does that roughly match your views?
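To make the second point concrete, here’s a crude multiplicative heuristic with entirely made-up numbers (not a real prioritisation framework):

```python
# Made-up numbers illustrating how a 2x shift in a risk estimate can be swamped
# by larger differences in tractability or personal fit. Not a real framework.
options = [
    # (name, risk_estimate, tractability, personal_fit) - all invented
    ("work on risk A", 0.10, 0.01, 0.3),
    ("work on risk B", 0.05, 0.10, 1.0),
]

for name, risk, tractability, fit in options:
    score = risk * tractability * fit  # crude multiplicative heuristic
    print(f"{name}: score={score:.4f}")

# Doubling or halving either risk estimate changes its score by 2x,
# but the 10x difference in tractability already dominates the comparison.
```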
If I thought there was a research project that would cause most people to revise that estimate to, say, 0.1%, I do think this would be super valuable.
Just to check, I assume you mean that there’d be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn’t cause such a revision otherwise?
One alternative thing you might mean: “I think the best estimate is 0.1%, and I think a research project that would cause most people to realise that would be super valuable.” But I’m guessing that’s not what you mean?
Yes, that sounds roughly right. I hadn’t thought about the value for communicating with broader audiences.
Just to check, I assume you mean that there’d be a lot of value in a research project that would cause most people to revise that estimate to (say) 0.1%, if indeed the best estimate is (say) 0.1%, and that wouldn’t cause such a revision otherwise?
Yes, that’s what I meant.
(I think my own estimate is somewhere between 0.1% and 10% FWIW, but also feels quite unstable and like I don’t trust that number much.)