I’ve also now put code for Appendix C.3 on GitHub.
In fact, algorithmic progress has been found to be roughly as important as compute for explaining progress across a variety of different domains, such as Mixed-Integer Linear Programming, SAT solvers, and chess engines—an interesting coincidence that can help shed light on the source of algorithmic progress (Koch et al. 2022, Grace 2013). From a theoretical perspective, there appear to be at least three main explanations for where algorithmic progress ultimately comes from:
Theoretical insights, which can be quickly adopted to improve performance.
Insights whose adoption is enabled by scale, which only occurs after there’s sufficient hardware progress. This could be because some algorithms don’t work well on slower hardware, and only start working well once they’re scaled up to a sufficient level, after which they can be widely adopted.
Experimentation with new algorithms. For example, it could be that efficiently testing out all the reasonable choices for potential new algorithms requires a lot of compute.
I always feel kind of uneasy about how the term “algorithmic progress” is used. If you find an algorithm with better asymptotics, then apparent progress depends explicitly on the problem size. MILP seems like a nice benchmark because it’s NP-hard in general, but then again most(?) improvements exploit the structure of special classes of problems. Is that general progress?
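For what it’s worth, here’s a toy illustration of the first point, with made-up constants and a hypothetical O(n²) → O(n log n) improvement; the measured “progress” is whatever problem size you happen to benchmark at:

```python
import math

# Toy illustration (my numbers, not from any benchmark): the apparent
# "speedup" from an asymptotic improvement depends on the problem size
# you measure it at.
def apparent_speedup(n, c_old=1.0, c_new=5.0):
    """Old O(n^2) algorithm vs. new O(n log n) algorithm with a larger
    (made-up) constant factor."""
    return (c_old * n**2) / (c_new * n * math.log2(n))

for n in (10**3, 10**6, 10**9):
    print(f"n = {n:>13,}: ~{apparent_speedup(n):,.0f}x faster")
```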
One important factor affecting our ability to measure algorithmic progress is the degree to which algorithmic progress on one task generalizes to other tasks. So far, much of our data on algorithmic progress in machine learning has been on ImageNet. However, there seem to be two ways of making algorithms more efficient on ImageNet. The first way is to invent more efficient learning algorithms that apply to general tasks. The second method is to develop task-specific methods that only narrowly produce progress on ImageNet.
We care more about the rate of general algorithmic progress, which in theory will be overestimated by measuring the rate of algorithmic progress on any specific narrow task. This consideration highlights one reason to think that estimates overstate algorithmic progress in a general sense.
I definitely agree with the last sentence, but I’m still not sure how to think about this. I have the impression that, typically, some generalizable method makes a problem feasible, at which point focused attention on applying related methods to that problem drives solving it towards being economical. I suppose for this framework we’d still care more about the generalizable methods, because those trigger the starting gun for each automatable task?
Considerations on transformative AI and explosive growth from a semiconductor-industry perspective
I also don’t want to turn this into “and now Aella hijacks this issue to talk about personal social drama where she picks through the details of every rumor in order to convince you she’s not a terrible person”. But still, it’s hard to convey exactly how this works without examples, and specific instances of this happening to me are what have so strongly updated my views, so I’m going to pick just a few.
I appreciate your taking care here, and concreteness is valuable in conversations like this. Even so, I’m not sure what I can take away with respect to EA from this middle half of your post. I don’t know you apart from your two prior comments on this forum. It’s hard to distinguish your account here from what I’d expect to read from a verbally skilled actor who’s fluent in the strategic use of bright lines to avoid getting pinned down. Narrative may be the best remaining means to warn against such an actor, and by construction that narrative can just as well be framed as mere insinuation or hostile framing. So it’s hard to treat statements like yours as evidence without a foundation of trust in involved individuals or even just familiarity with their community. Separately, the specifics of your analysis of statements about you seem to me at best tenuously related to coverage of EA, even in your transition back to discussing it.
I’m left feeling that this wasn’t an appropriate forum for that section of your post. (I don’t object to your defending yourself—I doubt I’d have commented if you’d left this section at the link to your original response to your friend’s statements. Similarly, I’m happy to encourage critical reading.) I seem to be in the minority—you do say you’re “mini-famous”, so maybe more in-the-loop readers have more to go on—but I do wonder if others have thoughts along or outright against these lines.
named the “Gell-Mann Amnesia Effect” by Richard Feynman
Aside, but that was Michael Crichton.
Differential response within the survey is just as bad.
The response rate for the survey as a whole was about 20% (265 of 1345), and below 8% (102) for every individual question on which data was published across three papers (on international differences, the Flynn effect, and controversial issues).
On average, respondents attributed 47% of the U.S. black-white difference in IQ to genetic factors. On similar questions about cross-national differences, they attributed 20% of cognitive differences to genes on average. There were 86 responses on the U.S. question and between 46 and 64 on the others.
Steve Sailer’s blog was rated highest for accuracy in reporting on intelligence research, and by far: none of the sources that got more ratings (which is to say, every mainstream English-language publication asked about) was even in the ballpark. It was rated by 26 respondents.
The underlying data isn’t available, but this is all consistent with the (known) existence of a contingent of ISIR conference attendees who are likely to follow Sailer’s blog and share strong, idiosyncratic views on specifically U.S. racial differences in intelligence. The survey is not a credible indicator of expert consensus.
(More cynically, this contingent has a history of going to lengths to make their work appear more mainstream than it is. Overrepresenting them was a predictable outcome of distributing this survey. Heiner Rindermann, the first author on these papers, can hardly have failed to consider that. Of course, what you make of that may hinge on how legitimate you think their work is to begin with. Presumably they would argue that the mainstream goes to lengths to make their work seem fringe.)
No, Dario Amodei and Paul Christiano were at the time employed by OpenAI, the recipient of the $30M grant. They were associated with Open Philanthropy in an advisory role.
I’m not trying to voice an opinion on whether this particular grant recommendation was unprincipled. I do think that things like this undermine trust in EA institutions, set a bad example, and make it hard to get serious concerns heard. Adopting a standard of avoiding appearance of impropriety can head off these concerns and relieve us of trying to determine on a case-by-case basis how fishy something is (without automatically accusing anyone of impropriety).
I’m mainly referring to this, at the bottom:
OpenAI researchers Dario Amodei and Paul Christiano are both technical advisors to Open Philanthropy and live in the same house as Holden. In addition, Holden is engaged to Dario’s sister Daniela.
Holden is Holden Karnofsky, at the time OP’s Executive Director, who also joined OpenAI’s board as part of the partnership initiated by the grant. Presumably he wasn’t the grant investigator (not named), just the chief authority of their employer. OP’s description of their process does not suggest that he or the OP technical advisors from OpenAI held themselves at any remove from the investigation or decision to recommend the grant:
OpenAI initially approached Open Philanthropy about potential funding for safety research, and we responded with the proposal for this grant. Subsequent discussions included visits to OpenAI’s office, conversations with OpenAI’s leadership, and discussions with a number of other organizations (including safety-focused organizations and AI labs), as well as with our technical advisors.
Does it? The Doing EA Better post made it sound like conflict-of-interest statements are standard (or were at one point), but recusal is not, at least for the Long-Term Future Fund. There’s also this Open Philanthropy OpenAI grant, which is infamous enough that even I know about it. That was in 2017, though, so maybe it doesn’t happen anymore.
More specifically, EA shows a pattern of prioritising non-peer-reviewed publications – often shallow-dive blogposts[36] – by prominent EAs with little to no relevant expertise.
This is my first time seeing the “climate change and longtermism” report at that last link. Before having read it, I imagined the point of having a non-expert “value-aligned” longtermist apply their framework to climate change would be things like
a focus on the long-run effects of climate change
a focus on catastrophic scenarios that may be very unlikely but difficult to model or quantify
Instead, the report spends a lot of time on
recapitulation of consensus modeling (to be clear, this is a good thing that’s surprisingly hard to come by), which mainly goes out to 2100
plausible reasons models may be biased towards negative outcomes, particularly in the most likely scenarios
The two are interwoven, which weakens the report even as a critical literature review. When it comes to particular avenues for catastrophe, the analysis is often perfunctory and dismissive. It comes off less as a longtermist perspective on climate change than as having an insider evaluate the literature because only “we” can be trusted to reason well.
I don’t know how canonical that report has become. The reception in the thread where it was posted looks pretty critical, and I don’t mean to pile on. I’m commenting because this post links the report in a way that looks like a backhanded swipe, so once I read it myself I felt it was worth sketching out my reaction a bit further.
Appearance of impropriety
Both examples show how we can act to reduce the catastrophe rate over time, but there are also 3 key risk factors applying upward pressure on the catastrophe rate:
The lingering nature of present threats
Our ongoing ability to generate new threats
Continuously lowering barriers to entry/access
In the case of AI, it’s usually viewed that AI will be aligned or misaligned, meaning this risk is either solved or not. It’s also possible that AI may be aligned initially and become misaligned later[11]. Protection from bad AI would therefore need to be ongoing. In this scenario we’d need systems in place to stop AI being misappropriated or manipulated, similar to how we guard nuclear weapons from dangerous actors. This is what I term “lingering risk”.
I just want to flag one aspect of this I haven’t seen mentioned, which is that much of this lingering risk naturally grows with population, since you have more potential actors. If you have acceptable risk per century with 10 BSL-4 labs, the risk with 100 labs might be too much. If you have acceptable risk with one pair of nuclear rivals in a cold war, a 20-way cold war could require much heavier policing to meet the same level of risk. I expanded on this in a note here.
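To make the scaling concrete, here’s a toy calculation with a made-up per-facility probability, assuming facilities fail independently:

```python
# Toy numbers: per-facility catastrophe probability p per century,
# N independent facilities. The chance of at least one event is
# 1 - (1 - p)^N, roughly N*p when p is small.
def aggregate_risk(p, n):
    return 1 - (1 - p) ** n

p = 0.001  # made-up per-lab probability per century
for n in (10, 100):
    print(f"{n:>3} labs: {aggregate_risk(p, n):.2%} risk per century")
```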
[2023-01-19 update: there’s now an expanded version of this comment here.]
Note: I’ve edited this comment after dashing it off this morning, mainly for clarity.
Sure, that all makes sense. I’ll think about spending some more time on this. In the meantime I’ll just give my quick reactions:
On reluctance to be extremely confident—I start to worry when considerations like this dictate that one give a series of increasingly specific/conjunctive scenarios roughly the same probability. I don’t expect a forum comment or blog post to get someone to such high confidence, but I don’t think it’s beyond reach.
We also have different expectations for AI, which may in the end make the difference.
I don’t expect machine learning to help much, since the kinds of structures in question are very far out of domain, and physical simulation has some intrinsic hardness problems.
I don’t think it’s correct to say that we haven’t tried yet.
Some of the threads I would pull on if I wanted to talk about feasibility, after a relatively recent re-skim:
We’ve done many simulations and measurements of nanoscale mechanical systems since 1992. How does Nanosystems hold up against those?
For example, some of the best-case bearings (e.g. multi-walled carbon nanotubes) seem to have friction worse than Drexler’s numbers by orders of magnitude. Why is that?
Edges also seem to be really important in nanoscale friction, but this is a hard thing to quantify ab initio.
I think there’s an argument using the Akhiezer limit on frequency-quality factor (f·Q) products that puts tighter upper bounds on dissipation for stiff components, at least at “moderate” operating speeds. This is still a pretty high bound if it can be reached, but dissipation (and cooling) are generally weak points in Nanosystems.
I don’t recall discussion of torsional rigidity of components. I think you can get a couple orders of magnitude over flagellar motors with CNTs, but you run into trouble beyond that.
Nanosystems mainly considers mechanical properties of isolated components and their interfaces. If you look at collective motion of the whole, everything looks much worse. For example, stiff 6-axis positional control doesn’t help much if the workpiece has levered fluctuations relative to the assembler arm.
Similarly, in collective motion, non-bonded interfaces should be large contributors to phonon radiation and dissipation.
Due to surface effects, just about anything at the nanoscale can be piezoelectric/flexoelectric with a strength comparable to industrial workhorse bulk piezoelectrics. This can dramatically alter mechanical properties relative to the continuum approximation. (Sometimes in a favorable direction! But it’s not clear how accurate simulations are, and it’s hard to set up experiments.)
Current ab initio simulation methods are accurate only to within a few percent on “easy” properties like electric dipole moments (last I checked). Time-domain simulations are difficult to extend beyond picoseconds. What tolerances do you need to make reliable mechanisms?
In general I wouldn’t be surprised if a couple orders of magnitude in productivity over biological systems were physically feasible for typically biological products (that’s closer to my 1% by 2040 scenario). Broad-spectrum utility is much harder, as is each further step in energy efficiency or speed.
I found this post helpful, since lately I’ve been trying to understand the role of molecular nanotechnology in EA and x-risk discussions. I appreciate your laying out your thinking, but I think full-time effort here is premature.
Overall, then, adding the above probabilities implies that my guess is that there’s a 4-5% chance that advanced nanotechnology arrives by 2040. Again, this number is very made up and not stable.
This sounds astonishingly high to me (as does 1-2% without TAI). My read is that no research program active today leads to advanced nanotechnology by 2040. Absent an Apollo program, you’d need several serial breakthroughs from a small number of researchers. Echoing Peter McCluskey’s comment, there’s no profit motive or arms race to spur such an investment. I’d give even a megaproject slim odds—all these synthesis methods, novel molecules, assemblies, information and power management—in the span of three graduate student generations? Simulations are too computationally expensive and not accurate enough to parallelize much of this path. I’d put the chance below 1e-4, and that feels very conservative.
Here’s a quick attempt to brainstorm considerations that seem to be feeding into my views here: “Drexler has sketched a reasonable-looking pathway and endpoint”, “no-one has shown X isn’t feasible even though presumably some people tried”
Scientists convince themselves that Drexler’s sketch is infeasible more often than one might think. But to someone at that point there’s little reason to pursue the subject further, let alone publish on it. It’s of little intrinsic scientific interest to argue an at-best marginal, at-worst pseudoscientific question. It has nothing to offer their own research program or their career. Smalley’s participation in the debate certainly didn’t redound to his reputation.
So there’s not much publication-quality work contesting Nanosystems or establishing tighter upper bounds on maximum capabilities. But that’s at least in part because such work is self-disincentivizing. Presumably some arguments people find sufficient for themselves wouldn’t go through in generality or can’t be formalized enough to satisfy a demand for a physical impossibility proof, but I wouldn’t put much weight on the apparent lack of rebuttals.
Interesting, thanks. I read Nanosystems as establishing a high upper bound. I don’t see any of its specific proposals as plausibly workable enough to use as a lower bound in the sense that, say, a ribosome is a lower bound, but perhaps that’s not what Eliezer means.
The concrete example I usually use here is nanotech, because there’s been pretty detailed analysis of what definitely look like physically attainable lower bounds on what should be possible with nanotech, and those lower bounds are sufficient to carry the point.
It sounds like this is well-traveled ground here, but I’d appreciate a pointer to this analysis.
I added a more mathematical note at the end of my post showing what I mean by (2). I think in general it’s more coherent to treat trajectory problems with dynamic programming methods rather than try to integrate expected value over time.
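A minimal sketch of the kind of thing I mean by the dynamic-programming framing; the horizon, hazard rate, and payoffs here are all made up for illustration:

```python
# Backward induction over a toy "wait or cash in" trajectory problem,
# rather than integrating expected value forward over time.
HORIZON = 100   # made-up number of decision periods
HAZARD = 0.01   # made-up per-period probability of losing everything while waiting
GROWTH = 1.02   # made-up growth in the payoff for each period of waiting

def optimal_value(t, payoff):
    """Value of the optimal policy at period t with the current payoff on offer."""
    if t == HORIZON:
        return payoff
    wait = (1 - HAZARD) * optimal_value(t + 1, payoff * GROWTH)
    return max(payoff, wait)  # cash in now, or survive and decide again

print(f"Value of optimal policy: {optimal_value(0, 1.0):.3f}")
```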
I’ll answer my own question a bit:
Scattered critiques of longtermism exist, but are generally informal, tentative, and limited in scope. This recent comment and its replies were the best directory I could find.
A longtermist critique of “The expected value of extinction risk reduction is positive”, in particular, seems to be the best expression of my worry (1). My points about near-threshold lives and procrastination are another plausible story by which extinction risk reduction could be negative in expectation.
There’s writing about Pascalian reasoning (a couple of pieces that came up repeatedly were “A Paradox for Tiny Probabilities and Enormous Values” and “In Defence of Fanaticism”).
I vaguely recall a named paradox, maybe involving “procrastination” or “patience”, about how an immortal investor never cashes in—and possibly that this was a standard answer to Pascal’s wager/mugging together with some larger (but still tiny) probability of, say, getting hit by a meteor while you’re making the bet. Maybe I just imagined it.
I think FQXi usually gets around 200 submissions for its essay contests, where the entire pot is less than the first prize here. I wouldn’t be surprised if Open Phil got over 100 submissions.