I think this line of reasoning may be misguided, at least if taken in a particular direction. If the AI Safety community loudly talks about there being a significant chance of AGI within 10 years, then this will hurt the AI Safety community’s reputation when 10 years later we’re not even close. It’s important that we don’t come off as alarmists. I’d also imagine that the argument “1% is still significant enough to warrant focus” won’t resonate with a lot of people. If we really think the chances in the next 10 years are quite small, I think we’re better off (at least for PR reasons) talking about how there’s a significant chance of AGI in 20-30 years (or whatever we think), and how solving the problem of safety might take that long, so we should start today.
I think you’re right about AGI being very unlikely within the next 10 years. I would note, though, that the OpenPhil piece you linked to predicted at least 10% chance within 20 years, not 10 years (and I expect many people predicting “short timelines” would consider 20 years to be “short”). If you grant 1-2% chance to AGI in 10 years, perhaps that translates to 5-10% within 20 years.
Similarly, the word “majority” is used in a couple of places where it should instead say “plurality.” (Sorry to be nitpicky.)
I think you’re understating the importance of taking up scarce resources. There aren’t THAT many very high-quality medical researchers who can credibly signal their quality.
Are women more likely to return for a second event if the gender ratio of the first event they attended was more balanced? This could tell you whether the difference is simply a result of the community being mostly male right now, or if it’s due to some other reason(s).
One easy way you could get a sample that’s both broadly representative and also weights more involved EAs more is to make the survey available to everyone on the forum, but to weight all responses by the square root of the respondent’s karma. Karma is obviously an imperfect proxy, but it seems much easier to get than people’s donation histories, and it doesn’t seem biased in any particular direction. The square root is so that the few people with the absolute highest karma don’t completely dominate the survey.
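As a sketch of the weighting scheme described above (the respondent data and survey question here are hypothetical, purely for illustration):

```python
import math

# Hypothetical survey responses: each respondent's forum karma
# and their answer to some yes/no survey question.
responses = [
    {"karma": 10000, "answer": "yes"},
    {"karma": 400,   "answer": "no"},
    {"karma": 100,   "answer": "no"},
    {"karma": 25,    "answer": "yes"},
]

def weighted_share(responses, answer):
    """Fraction giving `answer`, weighting each response by the
    square root of the respondent's karma."""
    total = sum(math.sqrt(r["karma"]) for r in responses)
    matching = sum(math.sqrt(r["karma"])
                   for r in responses if r["answer"] == answer)
    return matching / total
```

The square root is what keeps the top of the distribution from dominating: a 10,000-karma respondent gets weight 100, only 20x a 25-karma respondent rather than 400x.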
“I’d compiled a list of 40-odd evidence-based activities and re-thinking exercises, i.e. behavioural and cognitive interventions, that I’d come across during my research”
Have you made this list public anywhere? I’d be interested in seeing the list (and I assume others would be too).
So let’s assume that teams of superforecasters with extremized predictions can do significantly better than any other prediction mechanism we’ve thought of, including prediction markets as they’ve existed so far. If so, then with prediction markets of sufficiently high volume and liquidity (just for the sake of argument, imagine prediction markets on the scale of the NYSE today), we would expect firms to crop up that identify superforecasters, train them, and optimize exactly how much to extremize their predictions (as well as iterating on this basic formula). These superforecaster firms would come to dominate the prediction markets (we’d eventually wind up with the equivalent of Goldman Sachs, but for prediction markets), and the markets would then be better than any other method of prediction. Of course, we’re a LONG way from having prediction markets like that, but I think this at least shows the theoretical potential of large-scale prediction markets.
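For readers unfamiliar with extremizing: a standard form transforms the aggregate probability in odds space, pushing it away from 0.5. A minimal sketch, where the exponent `a` is exactly the kind of parameter such a firm would tune empirically (the value 2.5 below is purely illustrative):

```python
def extremize(p, a=2.5):
    """Push a probability forecast away from 0.5 by raising the
    odds to the power a (a > 1 extremizes; a = 1 is the identity)."""
    return p**a / (p**a + (1 - p)**a)

# An aggregate forecast of 0.7 becomes more confident (~0.89),
# while a maximally uncertain forecast of 0.5 is left unchanged.
```

The intuition is that averaging many individually underconfident forecasts produces an aggregate that is still underconfident, so pushing it toward the extremes recovers calibration.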
I thought this piece was good. I agree that MCE work is likely quite high impact—perhaps around the same level as X-risk work—and that it has been generally ignored by EAs. I also agree that it would be good for there to be more MCE work going forward. Here’s my 2 cents:
You seem to be saying that AIA is a technical problem and MCE is a social problem. While I think there is something to this, I think there are very important technical and social sides to both. Much of the work related to AIA so far has been about raising awareness of the problem (e.g. the book Superintelligence), and this is more a social solution than a technical one. Also, avoiding a technological race for AGI seems important for AIA, and this too is more a social problem than a technical one.
For MCE, the 2 best things I can imagine (that I think are plausible) are both technical in nature. First, I expect clean meat will lead to the moral circle expanding more to animals. I really don’t see any vegan social movement succeeding in ending factory farming anywhere near as much as I expect clean meat to. Second, I’d imagine that a mature science of consciousness would increase MCE significantly. Many people don’t think animals are conscious, and almost no one thinks anything besides animals can be conscious. How would we even know if an AI was conscious, and if so, whether it was experiencing joy or suffering? The only way would be to develop theories of consciousness that we have high confidence in. But right now we’re very limited in studying consciousness, because our tools for interfacing with the brain are crude. Advanced neurotechnologies could change that—they could allow us to test hypotheses about consciousness. Again, developing these technologies would be a technical problem.
Of course, these are just the first ideas that come into my mind, and there very well may be social solutions that could do more than the technical solutions I mentioned, but I don’t think we should rule out the potential role of technical solutions, either.
As long as we’re talking about medical research from an EA perspective, I think we should consider funding therapies for reversing aging itself. In terms of scale, aging is undoubtedly by far the largest problem (100,000 people die from age-related diseases every single day, not to mention the psychological toll that aging causes). Aging is also quite neglected—very few researchers focus on trying to reverse it. Tractability is of course a concern here, but I think this point is a bit nuanced. Achieving a full and total cure for aging would clearly be quite hard. But what about a partial cure? What about a therapy that made 70 year olds feel and act like they were 50, with an additional 20 years of life expectancy? Such a treatment may be much more tractable. At least a large part of aging seems to be due to several common mechanisms (such as DNA damage, accumulation of senescent cells, etc), and reversing some of these mechanisms (such as by repairing DNA or clearing the body of senescent cells) might allow for such a treatment. Even the journal Nature (one of the 2 most prestigious science journals in the world) had a recent piece saying as much:
If anyone is interested in funding research toward curing aging, the SENS Foundation (http://www.sens.org) is arguably your best bet.
“the community members who agree with this reasoning, have moved on to other problem areas”
I’ve seen this problem come up in other areas as well. For instance, funding research to combat aging (e.g. the SENS Foundation) gets little support, because basically anyone who will “shut up and multiply,” concluding that SENS is higher EV than GiveWell charities, will use the same logic to conclude that AI safety is higher EV than GiveWell charities or SENS.
I really like this type of reasoning—I think it allows for easier comparisons than the standard expected value assessments people have occasionally tried to do for systemic changes. A couple points, though.
1) I think very few systemic changes will affect 1B people. Typically I assume a campaign will be focused on a particular country, and likely only a portion of that country’s population would be positively affected by the change—meaning 10M or 100M people is probably much more typical. This shifts the cutoff cost closer to around $1B to $10B, which seems plausibly in the same ballpark as GD.
2) Instead of asking “how much would this campaign cost to definitely succeed”, you could ask “how much would it cost to run a campaign that had at least a 50% chance of succeeding” and then divide the HALYs by 2. I’d imagine this is a much easier question to answer, as you’d never be certain that an effort at systemic change would be successful, but you could become confident that the chances were high.
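To make the expected-value version of this calculation concrete (all the numbers below are hypothetical, chosen only to illustrate the arithmetic):

```python
# Hypothetical campaign: 50M people affected, 0.2 HALYs gained
# per person if the campaign succeeds, 50% chance of success.
people = 50_000_000
halys_per_person = 0.2
p_success = 0.5
campaign_cost = 2_000_000_000  # $2B

# Multiplying through by the success probability is equivalent
# to "dividing the HALYs by 2" for a 50% chance.
expected_halys = people * halys_per_person * p_success  # 5M HALYs
cost_per_expected_haly = campaign_cost / expected_halys  # $400/HALY
```

A figure like this could then be compared directly against the cost-per-HALY of a GiveDirectly-style intervention.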
It seems like a lot of these are for funding particular researchers. I don’t know of a way to do this in a tax-deductible manner. I think it would be good if someone created an organization with tax-exempt status that allowed people to donate and specify which researchers they wanted the donation to go toward.
Yeah, I was referring to the accessible universe, though I guess you are right that I can’t even be 100% certain that our theories on that won’t be overturned at some point.
Thanks for taking the time to write this post. I have a few comments—some supportive, and some in disagreement with what you wrote.
I find your worries about Peak Oil to be unsupported.
In the last several years, the US has found tons of natural gas that it can access—perhaps even a 100-year supply or more. On top of this, renewables are finally starting to really prove their worth, with both wind and solar reaching new heights. Solar in particular has improved drastically: exponential decay in cost over decades (with cost finally reaching parity with fossil fuels in many parts of the world), exponential increase in installations, etc. If fossil fuels really were running out, that would arguably be a good thing, as it would increase the price of fossil fuels and make the transition to solar even quicker (and we’d have a better chance of avoiding the worst effects of climate change). Unfortunately, the opposite seems more likely: as ice in the Arctic melts, more fossil fuels (currently under the ice) will become accessible.
I think “The Limits to Growth” is not a particularly useful guide to our situation.
This report might have been a reasonable thing to worry about in 1972, but I think a lot has changed since then that we need to take into account. First off, yes, obviously exponential growth with finite resources will eventually hit a wall, and obviously the universe is finite. But the truth is that while there are limits, we’re not even remotely close to them. There are several specific technological trends that each seem likely to turn LTG-type thinking about near-term limits on its head, including clean energy, AI, nanotechnology, and biotechnology. We are so far from the limits of these technologies—yet even modest improvements will let us surpass the limits of the world today. Regarding the fact that the 1970-2000 data fits with the predictions of LTG—this point is just silly. LTG’s prediction can be roughly summarized as “the status quo continues with things going well until around 2020 to 2030, and then stuff starts going terribly.” The controversial claim isn’t the first part about stuff continuing to go well for a while, but the second part about stuff then going terribly. The fact that we’ve continued to do well (as their model predicted!) doesn’t mean that the second part of their model will also come true and things will then go terribly.
I have no idea how plausible a Malthusian disaster in Sub-Saharan Africa is.
I know that climate change has the potential to cause massive famines and mass migrations—and I agree that has the potential to increase right-wing extremism in Europe (and that this would all be terrible). I don’t know what the projected timeframe on that is, though. I also hadn’t heard of most of the other problems you listed in this section. Unfortunately, after reading your section on peak oil, which struck me as both unsubstantiated (I mean no offense by this—just being straightforward) and also somewhat biased (for instance I can sense some resentment of “elites” in your writing, among other things), I now don’t know how much faith to have in your analysis of the Sub-Saharan African situation (which I feel much less qualified to judge than the other section).
I agree it is good for people to be thinking about these sorts of things, and I would encourage more research into the area. Also, I hadn’t heard of the Transafrican Water Pipeline Project, and agree that it would make sense for EAs to evaluate whether it would be an effective use of charitable donations.
Nanotechnology is technology that has parts operating in the range of 1 nm to 100 nm, so this technology actually is nanotechnology—as is much of the rest of biotechnology.
You’re right that non-biotech-based nanotechnology (what people typically think of as nanotechnology) hasn’t been used much; that’s largely because it’s a nascent area. I expect that to change over the coming decades as the technology improves. It might not, though, as biotech-based nanotechnology might stay in the lead.
Broadly speaking, nanoparticles (or nanorobots, depending on how complicated they are) that scan the brain from the inside, in vivo. The sort of capability I’m imagining is the ability to monitor every neuron in large neural circuits simultaneously, each for many different chemical signals (such as specific neurotransmitters). Of course, since this technology doesn’t exist yet, the specifics are necessarily uncertain—these probes might include CMOS circuitry, they might be based on DNA origami, or they might be unlike any technology that currently exists. Such probes would allow for building much more accurate maps of brain activity.
Neuroprosthesis-driven uploading seems vastly harder for several reasons:
• you’d still need to understand in great detail how the brain processes information (if you don’t, you’ll be left with an upload that, while perhaps intelligent, would not act like the person acted—perhaps so drastically that it might be better to think of it as a form of NAGI than as WBE)
• integrating the exocortex with the brain would likely still require nanotechnology able to interface with the brain
• ethical/ regulatory hurdles here seem immense
I’d actually expect that in order to understand the brain enough for neuroprosthesis-driven uploading, we’d still likely need to run experiments with nanoprobes (for the same arguments as in the paper: lots of the information processing happens on the sub-cellular level—this doesn’t mean that we have to replicate this information processing in a biologically realistic manner, but we likely will need to at least understand how the information is processed)
Also here’s a 5 minute talk I gave at EA Global London on the same topic: