Thanks, that makes sense. This is one aspect in which audience is an important factor. Our two recent nuclear war model papers (on the probability and impacts) were written to be accessible to wider audiences, including audiences less familiar with risk analysis. This is of course a factor for all research groups that work on topics of interest to multiple audiences, not just GCRI.
All good to know, thanks.
I’ll briefly note that I am currently working on a more extended discussion of policy outreach, suitable for posting online (possibly on this site), that is oriented toward improving the understanding of people in the EA-LTF-GCR community. It’s not certain I’ll have the chance to complete it given my other responsibilities, but hopefully I will.
Also if it would help I can provide suggestions of people at other organizations who can give perspectives on various aspects of GCRI’s work. We could follow up privately about that.
I actually had a sense that these broad overviews were significantly less valuable to me than some of the other GCRI papers that I’ve read and I predict that other people who have thought about global catastrophic risks for a while would feel the same.
That is interesting to hear. Some aspects of the overviews are of course going to be more familiar to domain experts. The integrated assessment paper in particular describes an agenda and is not intended to have much in the way of original conclusions.
The argument seemed to mostly consist of a few concrete examples, most of which seemed relatively tenuous to me. (Happy to go into more depth on that.)
I would be quite interested in further thoughts you have on this. I’ve actually found that the central ideas of the far future argument paper have held up quite well, possibly even better than I had originally expected. Ditto for the primary follow-up to this paper, “Reconciliation between factions focused on near-term and long-term artificial intelligence”, which is a deeper dive on this theme in the context of AI. Some examples of work that is in this spirit:
· Open Philanthropy Project’s grant for the new Georgetown CSET group, which pursues “opportunities to inform current and future policies that could affect long-term outcomes” (link)
· The study The Malicious Use of Artificial Intelligence, which, despite being led by FHI and CSER, is focused on near-term and sub-existential risks from AI
· The paper Bridging near- and long-term concerns about AI by Stephen Cave and Seán S. ÓhÉigeartaigh of CSER/CFI
All of these are more recent than the GCRI papers, though I don’t actually know how influential GCRI’s work was in any of the above. The Cave and ÓhÉigeartaigh paper is the only one that cites our work, and I know that some other people have independently reached the same conclusion about synergies between near-term and long-term AI. Even if GCRI’s work was not causative in these cases, these data points show that the underlying ideas have wider currency, and that GCRI may have been (probably was?) ahead of the curve.
One kind of bad operationalization might be “research that would give the best people at FHI, MIRI and Open Phil a concrete sense of being able to make better decisions in the GCR space”.
That’s fine, but note that those organizations have much larger budgets than GCRI. Of them, GCRI has closest ties to FHI. Indeed, two FHI researchers were co-authors on the long-term trajectories paper. Also, if GCRI was to be funded specifically for research to improve the decision-making of people at those organizations, then we would invest more in interacting with them, learning what they don’t know / are getting wrong, and focusing our work accordingly. I would be open to considering such funding, but that is not what we have been funded for, so our existing body of work may be oriented in an at least somewhat different direction.
It may also be worth noting that the long-term trajectories paper functioned as more of a consensus paper, and so I had to be more restrained with respect to bolder and more controversial claims. To me, the paper’s primary contributions are in showing broad consensus for the topic, integrating the many co-authors’ perspectives into one narrative, breaking ground especially in the empirical analysis of long-term trajectories, and providing entry points for a wider range of researchers to contribute to the topic. Most of the existing literature is primarily theoretical/philosophical, but the empirical details are very important. (The paper also played a professional development role for me in that it gave me experience leading a massively multi-authored paper.)
Given the consensus format of the paper, I was intrigued that the co-author group was able to support the (admittedly toned down) punch-line in the conclusion “contrary to some claims in the catastrophic risk literature, extinction risks may not be categorically more important than large subextinction risks”. A bolder/more controversial idea that I have a lot of affinity for is that the common emphasis on extinction risk is wrong, and that a wider—potentially much wider—set of risks merits comparable concern. Related to this is the idea that “existential risk” is either bad terminology or not the right thing to prioritize. I have not yet had the chance to develop these ideas exactly as I see them (largely due to lack of funding for it), but the long-term trajectories paper does cover a lot of the relevant ground.
(I have also not had the chance to do much to engage the wider range of researchers who could contribute to the topic, again due to lack of funding for it. These would mainly be researchers with expertise on important empirical details. That sort of follow-up is a thing that funding often goes toward, but we didn’t even have dedicated funding for the original paper, so we’ve instead focused on other work.)
Overall, the response to the long-term trajectories paper has been quite positive. Some public examples:
· The 2018 AI Alignment Literature Review and Charity Comparison, which wrote: “The scope is very broad but the analysis is still quite detailed; it reminds me of Superintelligence a bit. I think this paper has a strong claim to becoming the default reference for the topic.”
· A BBC article on the long-term future, which calls the paper “intriguing and readable” and then describes it in detail. The BBC also invited me to contribute an article on the topic for them, which turned into this.
I do view this publishing of the LTF-responses as part of an iterative process.
That makes sense. I might suggest making this clear to other applicants. It was not obvious to me.
Thanks, this is good to know.
Oliver Habryka’s comments raise some important issues, concerns, and ideas for future directions. I elaborate on these below. First, I would like to express my appreciation for his writing these comments and making them available for public discussion. Doing this on top of the reviews themselves strikes me as quite a lot of work, but also very valuable for advancing grant-making and activity on the long-term future.
My understanding of Oliver’s comments is that while he found GCRI’s research to be of a high intellectual quality, he did not have confidence that the research is having sufficient positive impact. There seem to be four issues at play: GCRI’s audience, the value of policy outreach on global catastrophic risk (GCR), the review of proposals on unfamiliar topics, and the extent to which GCRI’s research addresses fundamental issues in GCR.
(1) GCRI’s audience
I would certainly agree that it is important for research to have a positive impact on the issues at hand and not just be an intellectual exercise. To have an impact, it needs an audience.
Oliver’s stated impression is that GCRI’s audience is primarily policy makers, and not the EA long-term future (EA-LTF) community or global catastrophic risk (GCR) experts. I would agree that GCRI’s audience includes policy makers, but I would disagree that our audience does not include the EA-LTF community or GCR experts. I would add that our audience also includes scholars who work on topics adjacent to GCR and can make important contributions to GCR, as well as people in other relevant sectors, e.g. private companies working on AI. We try to prioritize our outreach to these audiences based on what will have the most positive impact on reducing GCR given our (unfortunately rather limited) resources and our need to also make progress on the research we are funded for. We very much welcome suggestions on how we can do this better.
The GCRI paper that Oliver described (“the paper that lists and analyzes all the nuclear weapon close-calls”) is A Model for the Probability of Nuclear War. This paper is indeed framed for policy audiences, which was in part due to the specifications of the sponsor of this work (the Global Challenges Foundation) and in part because the policy audience is the most important audience for work on nuclear weapons. It is easy to see how reading that paper could suggest that policy makers are GCRI’s primary audience. Nonetheless, we did manage to embed some EA themes into the paper, such as the question of how much nuclear war should be prioritized relative to other issues. This is an example of us trying to stretch our limited resources in directions of relevance to wider audiences, including EA.
Some other examples: Long-term trajectories of human civilization was largely written for audiences of EA-LTF, GCR experts, and scholars of adjacent topics. Global Catastrophes: The Most Extreme Risks was largely written for the professional risk analysis community. Reconciliation between factions focused on near-term and long-term artificial intelligence was largely written for… well, the title speaks for itself, and is a good example of GCRI engaging across multiple audiences.
The question of GCRI’s audience is a detail for which an iterative review process could have helped. Had GCRI known that our audience would be an important factor in the review, we could have spoken to this more clearly in our proposal. An iterative process would increase the workload, but perhaps in some cases it would be worth it.
(2) The value of policy outreach
Oliver writes, “I am broadly not super excited about reaching out to policy makers at this stage of the GCR community’s strategic understanding, and am confused enough about policy capacity-building that I feel uncomfortable making strong recommendations based on my models there.”
This is consistent with comments I’ve heard expressed by other people in the EA-LTF-GCR community, and some colleagues report hearing things like this too. The general trend has been that people within this community who are not active in policy outreach are much less comfortable with it than those who are. This makes sense, but it also is a problem that holds us back from having a larger positive impact on policy. This includes GCRI’s funding and the work that the funding supports, but it is definitely bigger than GCRI.
This is not the space for a lengthy discussion of policy outreach. For now, it suffices to note that there is considerable policy expertise within the EA-LTF-GCR community, including at GCRI and several other organizations. There are some legitimately tricky policy outreach issues, such as in drawing attention to certain aspects of risky technologies. Those of us who are active in policy outreach are very attentive to these issues. A lot of the outreach is more straightforward, and a nontrivial portion is actually rather mundane. Improving awareness about policy outreach within the EA-LTF-GCR community should be an ongoing project.
It is also worth distinguishing between policy outreach and policy research. Much of GCRI’s policy-oriented work is the latter. The research can and often does inform the outreach. Where there is uncertainty about what policy outreach to do, policy research is an appropriate investment. While I’m not quite sure what is meant by “this stage of the GCR community’s strategic understanding”, there’s a good chance that this understanding could be improved by research by groups like GCRI, if we were funded to do so.
(3) Reviewing proposals on unfamiliar topics
We should in general expect better results when proposals are reviewed by people who are knowledgeable of the domains covered in the proposals. Insofar as Oliver is not knowledgeable about policy outreach or other aspects of GCRI’s work, then arguably someone else should have reviewed GCRI’s proposal, or at least these aspects of GCRI’s proposal.
This makes me wonder if the Long-Term Future Fund may benefit from a more decentralized review process, possibly including some form of peer review. It seems like an enormous burden for the fund’s team to have to know all the nuances of all the projects and issue areas that they could be funding. I certainly would not want to do all that on my own. It is common for funding proposal evaluation to include peer review, especially in the sciences. Perhaps that could be a way for the fund’s team to lighten its load while bringing in a wider mix of perspectives and expertise. I know I would volunteer to review some proposals, and I’m confident at least some of my colleagues would too.
It may be worth noting that the sciences struggle to review interdisciplinary funding proposals. Studies report a perceived bias against interdisciplinary proposals: “peers tend to favor research belonging to their own field” (link), so work that cuts across fields is funded less. Some evidence supports this perception (link). GCRI’s work is highly interdisciplinary, and it is plausible that this creates a bias against us among funders. Ditto for other interdisciplinary projects. This is a problem because a lot of the most important work is cross-cutting and interdisciplinary.
(4) GCRI’s research on fundamental issues in GCR
As noted above, GCRI does work for a variety of audiences. Some of our work is not oriented toward fundamental issues in GCR. But here is some that is:
* Long-term trajectories of human civilization is on (among other things) the relative importance of extinction vs. sub-extinction risks.
* The far future argument for confronting catastrophic threats to humanity: Practical significance and alternatives is on strategy for how to reduce GCR in a world that is mostly not dedicated to reducing GCR.
* Towards an integrated assessment of global catastrophic risk outlines an agenda for identifying and evaluating the best ways of reducing the entirety of global catastrophic risk.
See also our pages on Cross-Risk Evaluation & Prioritization, Solutions & Strategy, and perhaps also Risk & Decision Analysis.
Oliver writes “I did not have a sense that they were trying to make conceptual progress on what I consider to be the current fundamental confusions around global catastrophic risk, which I think are more centered around a set of broad strategic questions and a set of technical problems.” He can speak for himself on what he sees the fundamental confusions as being, but I find it hard to conclude that GCRI’s work is not substantially oriented toward fundamental issues in GCR.
I will note that GCRI has always wanted to focus primarily on the big cross-cutting GCR issues, but we have never gotten significant funding for it. Instead, our funding has gone almost exclusively to more narrow work on specific risks. That is important work too, and we are grateful for the funding, but I think a case can be made for more support for cross-cutting work on the big issues. We still find ways to do some work on the big issues, but our funding reality prevents us from doing much.
Thanks for this conversation. Here are a few comments.
Regarding the Ukraine crisis and the current NATO-Russia situation, I think Max Fisher at Vox is right to raise the issue as he has, with an excellent mix of insider perspectives. There should be more effort like this, in particular to understand Russia’s viewpoint. For more on this topic I recommend recent work by Rajan Menon [http://nationalinterest.org/feature/newsflash-america-ukraine-cannot-afford-war-russia-13137], [http://nationalinterest.org/feature/avoiding-new-cuban-missile-crisis-ukraine-12947], [http://www.amazon.com/Conflict-Ukraine-Unwinding-Post-Cold-Originals/dp/0262029049] and Martin Hellman’s blog [https://nuclearrisk.wordpress.com]. I do think Fisher somewhat overstates the risk by understating the possibility of a “frozen conflict”—see Daniel Drezner’s discussion of this [http://www.washingtonpost.com/posteverything/wp/2015/07/01/the-perils-of-putins-grim-trigger]. That said, the Ukraine crisis clearly increases the probability of nuclear war, though I think it also increases the prospects and opportunities for resolving major international tensions by drawing attention to them [http://www.huffingtonpost.com/seth-baum/best-and-worst-case-scena_b_4915315.html]. Never let a good crisis go to waste.
Regarding the merits of the EA community working on nuclear war risk, I think it’s worth pursuing. Yes, the existence of an established nuclear weapons community means there is more supply of work on this topic, but there is also more demand, especially more high-level demand. I see a favorable supply-demand balance, which is a core reason why GCRI has done a lot on this topic. (We also happen to have relevant background and connections.) Of note, the established community has less inclination towards quantitative risk analysis, and also often takes partisan nationalistic or ideological perspectives; people with EA backgrounds can make valuable contributions on both fronts. My big piece of advice for EAs seeking to get involved is to immerse yourself in the nuclear weapons community to understand its concepts, perspectives, etc., and to respect all that it has already accomplished, instead of showing up expecting to immediately teach them things they didn’t know already. This is comparable to the situation with foreign aid projects that don’t bother to see what local communities actually benefit from.
I see the logic here, but I would hesitate to treat it as universally applicable. Under some circumstances, more centralized structures can outperform. For example, if China or Wal-Mart decides to reduce greenhouse gas emissions, then you can get a lot more than if the US or the corner store decides to, because the latter are more decentralized. That’s for avoiding catastrophes. For surviving them, sometimes you can get similar effects. However, local self-sufficiency can be really important. We argued this in http://sethbaum.com/ac/2013_AdaptationRecovery.html. As for anti-trust, perhaps this could help, but this doesn’t strike me as the right place to start. It seems like a difficult area to make progress on relative to the potential gains in terms of gcr reduction. But I could be wrong, as I’ve not looked into it in any detail.
OK, I’m wrapping up for the evening. Thank you all for these great questions and discussion. And thanks again to Ryan Carey for organizing.
I’ll check back in tomorrow morning and try to answer any new questions that show up.
For what it’s worth, I became a (bad) vegan/vegetarian because at its worst, industrial animal husbandry seems to do some truly terrible things. And sorting out the provenance of animal products is just a major PITA, fraught with all sorts of uncertainty and awkward social moments, such as being the doof at the restaurant who needs to ask five different questions about where/how/when the cow got turned into the steak. It’s just easier for me to order the salad.
I mainly eat veg foods too. It reduces environmental problems, which helps on gcr/xrisk. And it’s good for livestock welfare, which is still a good thing to help on. And it lowers global food prices, which is good for global poverty. And apparently it’s also healthy.
My interest in x-risk comes from wanting to work on big/serious problems. I can’t think of a bigger one than x-risk.
Yeah, same here. I think the most difficult ethical issue with gcr/xrisk is the idea that other, smaller issues don’t matter so much. It’s like we don’t care about the poor or something like that. What I say here is that no, it’s precisely because we do care about the poor, and everyone else, that it’s so important to reduce these risks. Because unless we avoid catastrophe, nothing else really matters. All that work on all those other issues would be for nothing.
I took an honors BA which included a pretty healthy dose of post-structuralist inflected literary theory, along with math and fine arts. I did a masters in architecture, worked in that field for a time, then as a ‘creative technologist’ and now I’m very happy as a programmer, trying to learn as much math as I can in my free time.
Very interesting!
It looks like a good part of the conversation is starting to revolve around influencing policy. I think there’s some big macro social/cultural forces that have been pushing people to be apolitical for a while now. The most interesting reform effort I’ve heard about lately is Lawrence Lessig’s anti-PAC in the US. How can we effectively level our political games up?
I agree there are macro factors pushing people away from policy. However, that can actually increase the effectiveness of policy engagement: less competition.
A great way to level up in politics is to get involved in local politics. Local politics is seriously underrated. It is not terribly hard to actually change actual policies. And you make connections that can help you build towards higher levels.
For gcr, a good one is urban planning to reduce greenhouse gas emissions. I’m biased here, because I’m an urban planning junkie, but there’s always loads of opportunity. Here in NYC I have my eye on a new zoning policy change. It’s framed in terms of affordable housing, not global warming, but the effect is the same. See http://www.vox.com/2015/2/21/8080237/nyc-zoning-reform.
Total mixed bag of questions, feel free to answer any/all. Apologies if you’ve already written on the subject elsewhere; feel free to just link if so.
No worries.
What is your current marginal project(s)? How much will they cost, and what’s the expected output (if they get funded)?
We’re currently fundraising in particular for integrated assessment, http://gcrinstitute.org/integrated-assessment. Most institutional funders have programs on only one risk at a time. We’re patching integrated assessment work from other projects, but hope to get more dedicated integrated assessment funding. Something up to around $1M/yr would probably suit us well for now, but this is significantly higher than what we currently have, and every dollar helps.
What is the biggest mistake you’ve made?
This is actually an easy one, since we just finished shifting our focus. The biggest mistake we made was letting ourselves get caught up on an ad hoc, unfocused mix of projects, instead of prioritizing better. The integrated assessment is now our core means of prioritizing. See more at http://gcrinstitute.org/february-newsletter-new-directions-for-gcri.
What is the biggest mistake you think others make?
Well, most people make the mistake of not focusing mainly on gcr reduction. Within the gcr community, I think the biggest mistake is not focusing on how best to reduce the risks. Instead a lot of people focus on the risks themselves.
What do you think about the costs and benefits of publishing in journals as strategy?
We publish mainly in academic journals. It takes significant extra effort and introduces delays, but it almost always improves the quality of the final product, it attracts a wider audience, it can be used more widely, and it has significant reputation benefits. But we make heavy use of our academic careers and credentials. It’s not for everyone, and that’s OK.
Do you think the world has become better or worse over time? How? Why?
It’s become better and worse. Population, per capita quality of life, and values seem to be improving. But risks are piling up.
Do you think the world has become more or less at risk over time? How? Why?
More, due mainly to technological and environmental change. Opportunities are also increasing. The opportunities are all around us (for example, the internet), but the risks can be so enormous.
What do you think about Value Drift?
Define?
What do you think will be the impact of the Elon Musk money?
It depends on what proposals they get, but I’m cautiously optimistic that this will really help develop a culture of responsibility and safety among AI researchers. More so because it’s not just money—FLI and others are actively nurturing relationships.
How do you think about weighing future value vs current value?
All units of intrinsic value should be weighted equally regardless of location in time or space. (Intrinsic value: see http://sethbaum.com/ac/2012_Value-CBA.html.)
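To make the weighting concrete, here is a minimal sketch in my own notation (not taken from the linked paper): if $v_t$ is the intrinsic value realized at time $t$, then an outcome is evaluated with zero pure time preference,

$$V \;=\; \sum_{t=0}^{T} v_t \qquad \text{rather than} \qquad V \;=\; \sum_{t=0}^{T} \frac{v_t}{(1+\rho)^t} \;\text{ with } \rho > 0,$$

so a unit of intrinsic value a thousand years from now counts the same as one today. The same reasoning applies across space: distance from us carries no weight.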
What do you think about population growth/stagnation?
I don’t get too worried about it.
Why did you found a new institute rather than joining an existing one?
Because Tony Barrett and I didn’t see any existing institutes capable of working on gcr the way we thought it should be done, in particular working across all the risks with rigorous risk analysis & risk management methodology.
Are there any GCRs you are worried about that would not involve a high deathcount?
Totalitarianism is one. Another plausible one is toxic chemicals, but this might not be big enough to merit that level of concern. On toxics, see http://sethbaum.com/ac/2014_Rev-Grandjean.pdf.
What’s your probability distribution for GCR timescale?
I’m not sure what you mean by that, but at any rate, I don’t have confident estimates for specific probabilities.
Personal question, feel free to disregard, but this is an AMA: How has concern about GCR’s affected your personal life, beyond the obvious. Has it affected your retirement savings? Do you plan / already have children?
It hasn’t affected things like retirement or children. Maybe it should, but it hasn’t. The bigger factor is not gcr per se but fanaticism towards helping others. I push myself pretty hard, but I would probably be doing the same if I were focusing on, say, global poverty or animal welfare instead of gcr.
One of the major obstacles to combating Global Warming at the governmental level in America is the large financial investment that the fossil fuel industry makes to politicians in return for tens of billions of dollars in government assistance every year (widely varied numbers depending on how one calculates the incentives and tax breaks and money for research and so on). There seems to me to be only one way to change the current corrupt money for control of politicians process, and that is to demand that all political donations be made anonymously, given to the government who then deposits it in the political party or candidates’ account in a way that hides the identity of the donor from the recipient. This way the donor still has their “speech” and yet cannot wield undue influence on the politician. Most likely many such “donations” will stop as the corrupt people making them will understand that they can simply claim to have given and keep their money. What do you think of this idea? Why would it not work? How do we get it done?
First, I agree that a key to addressing global warming is to address the entrenched financial interests that have been opposing it. So you’re zooming in on at least one of the most important parts of it.
Your idea makes sense, at least at first glance. I don’t have a good sense for how politically feasible it is, but I’m afraid I’m skeptical. Any change to the structure of the political system that reduces large influences is likely to be fought by those influences. But I would not discourage you from looking into it further and giving it a try.
oops I think I answered this question up above. I think this is the link: http://effective-altruism.com/ea/fv/i_am_seth_baum_ama/2v9
What funding will GCRI require over the coming year to maintain these activities?
GCRI has a small base of ongoing funding that keeps the doors open, so to speak, except that we don’t have any actual doors. I will say, not having an office space really lowers costs!
The important thing is that GCRI is in an excellent place to convert additional funding into additional productivity, mainly by freeing up additional person-hours of work.
Then I guess you don’t think it’s plausible that we can’t expect to make many permanent gains. Why?
I’ll have to look at that link later, but briefly, I do think it can be possible to make some permanent gains, but there seem to be significantly more opportunities to avoid permanent losses. That said, I do not wish to dismiss the possibility of permanent gains, and am very much willing to consider them as of potential comparable significance.
Here’s one question: which risks are you most concerned about?
I shy away from ranking risks, for several reasons:
· The risks are often interrelated in important ways. For example, we analyzed a scenario in which geoengineering catastrophe was caused by some other catastrophe: http://sethbaum.com/ac/2013_DoubleCatastrophe.html. This weekend Max Tegmark was discussing how AI can affect nuclear war risk if AI is used for nuclear weapons command & control. So they’re not really distinct risks.
· Ultimately what’s important to rank is not the risks themselves, but the actions we can take to reduce them. We may sometimes have better opportunities to reduce smaller risks. For example, maybe some astronomers should work on asteroid risks even though this is a relatively low probability risk.
· Also, the answer to this question varies by time period. For, say, the next 12 months, nuclear war and pandemics are probably the biggest risks. For the next 50-100 years, we need to worry about these plus a mix of environmental and technological risks.
And who do you think has the power to reduce those risks?
There’s the classic Margaret Mead quote, “Never underestimate the power of a small group of committed people to change the world. In fact, it is the only thing that ever has.” There’s a lot of truth to this, and I think the EA community is well on its way to being another case in point. That is as long as you don’t slack off! :)
That said, I keep an eye on a mix of politicians, other government officials, researchers, activists, celebrities, journalists, philanthropists, entrepreneurs, and probably a few others. They all play significant roles and it’s good to be able to work with all of them.
What are GCRI’s current plans or thinking around reducing synthetic biology risk? Frighteningly, there seems to be underinvestment in this area.
We have an active synbio project modeling the risk and characterizing risk reduction opportunities, sponsored by the US Dept of Homeland Security: http://gcrinstitute.org/dhs-emerging-technologies-project.
I agree that synbio is an under-invested-in area across the gcr community. Ditto for other bio risks. GCRI is working to correct that, as is CSER.
Also, with regard to the research project on altruism, my shoot-from-the-hip intuition is that you’ll find somewhat different paths into effective altruism than other altruistic activities. Many folks I know now involved in EA were convinced by philosophical arguments from people like Peter Singer. I believe Tom Ash (tog.ash@gmail.com) embedded Qs about EA genesis stories in the census he and a few others conducted.
Thanks! Very helpful.
As for more general altruistic involvement, one promising body of work is on the role social groups play. Based on some of the research I did for Reducetarian message-framing, it seems like the best predictor of whether someone becomes a vegetarian is whether their friends also engage in vegetarianism (this accounts for more of the variance than self-reported interest in animal welfare or health benefits). The same was true of the civil rights movement: the best predictor of whether students went down South to sign African Americans up to vote was whether they were part of a group that participated in this very activity.
Thanks again! I recall seeing data indicating that health was the #1 reason for becoming vegetarian, but I haven’t looked into this closely so I wouldn’t dispute your findings.
Buzz words here to aid in the search: social proof, peer pressure, normative social influence, conformity, social contagion.
Literature to look into:
- Sandy Pentland’s “social physics” work: http://socialphysics.media.mit.edu/papers/
- Chapter 4 (“Social proof”) of Cialdini’s Influence: Science and Practice: http://www.amazon.com/Influence-Science-Practice-5th-Edition/dp/0205609996
- McKenzie-Mohr’s book on Community-Based Social Marketing: http://www.cbsm.com/pages/guide/preface/
Thanks!
My commendations on another detailed and thoughtful review. A few reactions (my views, not GCRI’s):
Actually, a lot of scientists & engineers in nuclear power are not happy about the strict regulations on nuclear power. Note, I’ve been exposed to this because my father worked as an engineer in the nuclear power industry, and I’ve had other interactions with it through my career in climate change & risk analysis. Basically, widespread overestimation of the medical harms from radiation has caused nuclear power to be held to a much higher standard than other sources, especially fossil fuels.
A better example would be recombinant DNA—see Katja Grace’s very nice study of it. The key point is the importance of the scientists/engineers buying into the regulation. This is consistent with other work I’m familiar with on risk regulation etc., and with work I’ve published, e.g. this and this.
More precisely, the distinction is between issues that matter to voters in elections (plus campaign donors etc.) and issues that fly more under the radar. For now at least, AI still flies under the radar, creating more opportunity for expert insiders (like us) to have significant impact, as do most other global catastrophic risks. The big exception is climate change. (I’m speaking in terms of US politics/policy. I don’t know about other countries.)
This depends on the policy. A lot of policy is not about restricting AI, but instead about coordination, harmonizing standards, ensuring quality applications, setting directions for the field, etc. That said, it is definitely important to factor the reactions of AI communities into policy outreach efforts. (As I have been pushing for in e.g. the work referenced above.)
It varies from case to case. For a lot of research, the primary audience is other researchers/experts in the field. They generally have access to paywall journals and place significant weight on journal quality/prestige. Also open access journals typically charge author publication fees, generally in the range of hundreds to thousands of dollars. That raises the question of whether it’s a good use of funds. I’m not at all against open access (I like open access!); I only mean to note that there are other factors that may make it not always the best option.
Again it depends. Mass-market books typically get a lot more attention when they’re from a major publisher. These books are more than just books—they are platforms for a lot of attention and discussion. If e.g. Bostrom had self-published Superintelligence, it probably wouldn’t have gotten nearly the same attention. Also good publishers have editors who improve the books, and that costs money. I see a stronger case for self-publishing technical reports that have a narrower audience, especially if the author and/or their organization have the resources to do editing, page layout, promotion, etc.
Yes, definitely! I for one frequent the websites of peer organizations, and often wish they were more up to date.
I might worry that this could bias the field away from more senior people who may have larger financial responsibilities (family, mortgage, etc.) and better alternative opportunities for income. There’s no guarantee that future donations will be made, which creates a risk for the worker even if they’re doing excellent work.
Peer review should filter out bad/unoriginal research, sort it by topic (journal X publishes on topic X etc.), and improve papers via revision requests. Good journals do this. Not all journals are good. Overall I for one find significantly better quality work in peer reviewed journals (especially good journals) than outside of peer review.
I can’t speak to concerns about the Bay Area, but I can say that GCRI has found a lot of value in connecting with people outside the usual geographic hubs, and that this is something ripe for further investment (whether via GCRI or other entities). See e.g. this on GCRI’s 2019 advising/collaboration program, which we’re continuing in 2020.