Against the singularity hypothesis

Introduction

FYI, I disagree with the singularity hypothesis, but primarily due to epistemology, which isn’t even discussed in this article.
Error One
As low-hanging fruit is plucked, good ideas become harder to find (Bloom et al. 2020; Kortum 1997; Gordon 2016). Research productivity, understood as the amount of research input needed to produce a fixed output, falls with each subsequent discovery.
By way of illustration, the number of FDA-approved drugs per billion dollars of inflation-adjusted research expenditure decreased from over forty drugs per billion in the 1950s to less than one drug per billion in the 2000s (Scannell et al. 2012). And in the twenty years from 1971 to 1991, inflation-adjusted agricultural research expenditures in developed nations rose by over sixty percent, yet growth in crop yields per acre dropped by fifteen percent (Alston et al. 2000). The problem was not that researchers became lazy, poorly educated or overpaid. It was rather that good ideas became harder to find.
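To put the quoted drug figures in concrete terms (my arithmetic from the quoted numbers, nothing from the paper):

\[
\frac{\$1\text{B}}{40\ \text{drugs}} \approx \$25\text{M per drug (1950s)},
\qquad
\frac{\$1\text{B}}{1\ \text{drug}} = \$1\text{B per drug (2000s)},
\]

i.e. the measured research input per approved drug rose more than fortyfold, which is the sense in which research productivity fell.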
There are many other reasons for drug research progress to slow down. The healthcare industry, and science in general (see e.g. the replication crisis), are really broken, and some of the problems are newer. Also, maybe drug companies are putting a bunch of work into updates of existing drugs instead of new drugs.
Similarly, decreasing crop yield growth (in other words, yields are still increasing, just by smaller percentages) could have many other causes. Also, decreasing crop yield growth is a different thing than a decrease in the number of new agricultural ideas that researchers come up with – it’s not even the right quantity to measure to make his point. It’s a proxy for the thing his argument actually relies on, and he makes no attempt to consider how good or bad of a proxy it is; I can easily think of some reasons it wouldn’t be a very good one.
The comment about researchers not becoming lazy, poorly educated or overpaid is an unargued assertion.
So these are bad arguments which shouldn’t convince us of the author’s conclusion.
Error Two
Could the problem of improving artificial agents be an exception to the rule of diminishing research productivity? That is unlikely.
Asserting that something is unlikely isn’t an argument. His follow-up is to bring up Moore’s law potentially ending, not to give an actual argument.
As with the drug and agricultural research, his points are bad because singularity claims are not based on extrapolating patterns from current data, but rather on conceptual reasoning. He didn’t even claim his opponents were extrapolating from data in the section formulating their position, and my pre-existing understanding of their views is that they use conceptual arguments, not extrapolation from existing data/patterns (there is no existing data about AGI to extrapolate from, so they use speculative arguments, which is OK).
Error Three
one cause of diminishing research productivity is the difficulty of maintaining large knowledge stocks (Jones 2009), a problem at which artificial agents excel.
You can’t just assume that AGIs will be anything like current software, including “AI” software like AlphaGo. You have to consider what an AGI would be like before you can even know whether it’d be especially good at this or not. If the goal with AGI is, in some sense, to make a machine with human-like thinking, then maybe it will end up with some of the weaknesses of humans too. You can’t just assume it won’t. You have to envision what an AGI would be like, or the many different things it might be like that would work (narrow it down to various categories and rule some things out), before you consider the traits it’d have.
Put another way: in MIRI’s conception, wouldn’t mind design space include both AGIs that are good at this particular category of task and AGIs that are bad at it?
Error Four
It is an unalterable mathematical fact that an algorithm can run no more quickly than its slowest component. If nine-tenths of the component processes can be sped up, but the remaining processes cannot, then the algorithm can only be made ten times faster. This creates the opportunity for bottlenecks unless every single process can be sped up at once.
This is wrong due to the “at once” at the end; it’d be fine without that. You could speed up 9 out of 10 parts, then speed up the 10th part a minute later. You don’t have to speed everything up at once. I know it’s just two extra words, but it doesn’t make sense when you stop and think about it, so I think it’s important. How did it seem to make sense to the author? What was he thinking? What process created this error? This is the kind of error that’s good to post mortem. (It doesn’t look like any sort of typo; I think it’s actually based on some sort of thought process about the topic.)
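For reference, the quoted ten-times figure is the standard serial-bottleneck (Amdahl’s law) calculation. Writing it out (my formalization, not the paper’s, assuming “nine-tenths of the component processes” means nine-tenths of the runtime) shows that simultaneity plays no role:

\[
\text{speedup}(s) \;=\; \frac{1}{(1-p) + p/s},
\qquad
\lim_{s\to\infty} \text{speedup}(s) \;=\; \frac{1}{1-p} \;=\; 10
\quad \text{for } p = 0.9.
\]

The bound only requires that the remaining tenth never gets sped up. If it does get sped up later, the speedups compose by multiplication, whether or not they happen at the same time.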
Error Five
Section 3.2 doesn’t even try to consider any specific type of research an AGI would be doing and claim that good ideas would get harder to find for that research, thereby slowing down singularity-relevant progress.
Similarly, section 3.3 doesn’t try to propose a specific bottleneck and explain how it’d get in the way of the singularity. He does bring up one specific type of algorithm – search – but doesn’t say why search speed would be a constraint on reaching the singularity. Whether exponential progress in search speed is needed depends on specific models of how the hardware and/or software are improving and what they’re doing.
There’s also a general lack of acknowledgement of, or engagement with, counter-arguments that I can easily imagine pro-singularity people making (e.g. responding to the good-ideas-get-harder-to-find point by saying that mind design space contains plenty of minds powerful enough for a singularity with a discontinuity, even if progress slows down later as it approaches some fundamental limits). Similarly, maybe there is something super powerful in mind design space that doesn’t rely on super fast search. Whether there is or not seems hard to analyze, but this paper doesn’t even try. (The way I’d approach it myself is indirectly, via epistemology first.)
Error Six
Section 2 mixes “Formulating the singularity hypothesis” (the section title) with other activities. This is confusing and biasing, because we don’t get to read about what the singularity hypothesis is without the author’s objections and dislikes mixed in. The section is also vague on some key points (mentioned in my screen recording), such as what an order of magnitude of intelligence is.
Examples:
Sustained exponential growth is a very strong growth assumption
Here he’s mixing explaining the other side’s view with setting it up to attack (as requiring a super high evidential burden due to making such strong claims). He’s not talking from the other side’s perspective, trying to present their view how they would present it (positively); he’s instead focusing on highlighting traits he dislikes.
A number of commentators have raised doubts about the cogency of the concept of general intelligence (Nunn 2012; Prinz 2012), or the likelihood of artificial systems acquiring meaningful levels of general intelligence (Dreyfus 2012; Lucas 1964; Plotnitsky 2012). I have some sympathy for these worries.[4]
This isn’t formulating the singularity hypothesis. It’s about ways of opposing it.
These are strong claims, and they should require a correspondingly strong argument to ground them. In Section 3, I give five reasons to be skeptical of the singularity hypothesis’ growth claims.
Again this doesn’t fit the section it’s in.
Padding
Section 3 opens with some restatements of material from section 2, some of which was also in the introduction. And look at this repetitiveness (my bolds):
Near the bottom of page 7 begins section 3.2:
3.2 Good ideas become harder to find
Below that we read:
As low-hanging fruit is plucked, good ideas become harder to find
Page 8 near the top:
It was rather that good ideas became harder to find.
Later in that paragraph:
As good ideas became harder to find
Also, page 11:
as time goes on ideas for further improvement will become harder to find.
Page 17:
As time goes on ideas for further improvement will become harder to find.
Amount Read
I read to the end of section 3.3, then briefly skimmed the rest.
Screen Recording
I recorded my screen and made verbal comments while writing this:
https://www.youtube.com/watch?v=T1Wu-086frA
Thanks!
I’m choosing not to debate.
If I’m reading your rules correctly, I’m still allowed to state whether I consider some errors unimportant, with or without giving reasons.
I think error 4 is unimportant because the point is about bottlenecks and it stands without the last two words, as you said.
If you’ve written anything against the singularity hypothesis, I would be curious to read it.
To be clear, you’re welcome to say whatever extra stuff you want.
Here is something: https://curi.us/2478-super-fast-super-ais
One way error 4 matters, besides what I said preemptively, is that it means none of the cites in the paper can be trusted without checking them.
FWIW I generally take this to be the case; unless I have strong prior evidence that someone’s citations are consistently to a high standard, I don’t assume their citations can be easily trusted, at least not for important things.
I don’t think the preemptive stuff you said is too important, because I think people make mistakes all the time and I was more interested in the fundamental arguments outlined and in evaluating them for myself.
Awesome. I think most people do not do that.
Thank you for following the game rules. You’re the only person out of four who did that.
BTW, I think that 25% rule-following rate is important evidence about the world, and rates much lower than 100% would be repeatable for many types of simple, clear rules that people voluntarily opt into. It’s a major concern for my debate policy proposals: you can put conditions on debates such as that people follow certain methodology, including regarding how to stop debating, and people can agree to those conditions … and then just break their word later (which has happened to me before).