Most of my stuff (even the stuff of interest to EAs) can be found on LessWrong: https://www.lesswrong.com/users/daniel-kokotajlo
Thanks for this thoughtful and detailed deep dive!
I think it misses the main cruxes though. Yes, some people (Drexler and young Yudkowsky) thought that ordinary human science would get us all the way to atomically precise manufacturing in our lifetimes. For the reasons you mention, that seems probably wrong.
But the question I’m interested in is whether a million superintelligences could figure it out in a few years or less, since that’s the situation we’ll actually be facing. (If it takes them, say, 10 years or longer, then they’ll probably have better ways of taking over the world.) To answer that question, we need to ask questions like:
(1) Is it even in principle possible? Is there some configuration of atoms that would be a general-purpose nanofactory, capable of making more of itself, that uses diamondoid rather than some other material? Or is there no such configuration?
Seems like the answer is “Probably, though not necessarily; it might turn out that the obstacles discussed are truly insurmountable. Maybe 80% credence.” If we remove the diamondoid criterion and allow it to be built of any material (but still require it to be dramatically more impressive and general-purpose / programmable than ordinary life forms), then I feel like the credence shoots up to 95%, the remaining 5% being model uncertainty.
(2) Is it practical for an entire galactic empire of superintelligences to build in a million years? (Conditional on 1, I think the answer to 2 is ‘of course.’)
(3) OK, conditional on the above, the question becomes what the limiting factor is—is it genius insights about clever binding processes or mini-robo-arm-designs exploiting quantum physics to solve the stickiness problems mentioned in this post? Is it mucking around in a laboratory performing experiments to collect data to refine our simulations? Is it compute & sim-algorithms, to run the simulations and predict what designs should in theory work? Genius insights will probably be pretty cheap to come by for a million superintelligences. I’m torn about whether the main constraint will be empirical data to fit the simulations, or compute to run the simulations.
(4) What’s our credence distribution over orders of magnitude of the following inputs: Genius, experiments, and compute, in each case assuming that it’s the bottleneck? Not sure how to think about genius, but it’s OK because I don’t think it’ll be the bottleneck. Our distributions should range over many orders of magnitude, and should update on our observation so far that however many experiments and simulations humans have done didn’t seem close to being enough.
I wildly guess something like 50% that we’ll see some sort of super-powerful nanofactory-like thing. I’m more like 5% that it consists of diamondoid in particular; there are so many different material designs, and even if diamondoid is viable and in some sense theoretically the best, the theoretical best probably takes several OOMs more inputs to achieve than something else which is merely good enough.
I feel like it was only a year or so ago that the standard critique of the AI safety community was that they were too abstract, too theoretical, that they lacked hands-on experience, lacked contact with empirical reality, etc...
Thanks for this! I think my own experience has led to different lessons in some cases (e.g. I think I should have prioritised personal fit less and engaged less with people outside the EA community), but I nevertheless very much approve of this sort of public reflection.
EA has a high deference culture? Compared to what other cultures? Idk but I feel like the difference between EA and other groups of people I’ve been in (grad students, City Year people, law students...) may not be that EAs defer more on average but rather that they are much more likely to explicitly flag when they are doing so. In EA the default expectation is that you do your own thinking and back up your decisions and claims with evidence*, and deference is a legitimate source of evidence so people cite it. But in other communities people would just say “I think X” or “I’m doing X” and not bother to explain why (and perhaps not even know why, because they didn’t really think that much about it).
*Other communities have this norm too, I think, but not to the same extent.
He more recently mentioned that he noticed “people continuously vanishing higher into the tower,” that is, focusing on more abstract and harder to evaluate issues, and that very few people have done the opposite. One commenter, Ben Weinstein-Raun, suggested several reasons, among them that longer-loop work is more visible, and higher status.
I disagree that longer-loop work is more visible and higher status, I think the opposite is true. In AI, agent foundations researchers are less visible and lower status than prosaic AI alignment researchers, who are less visible and lower status than capabilities researchers. In my own life, I got a huge boost of status & visibility when I did less agent foundationsy stuff and more forecasting stuff (timelines, takeoff speeds, predicting ML benchmarks, etc.).
FWIW I think that it’s pretty likely that AGI etc. will happen within 10 years absent strong regulation, and moreover that if it doesn’t, the ‘crying wolf’ effect will be relatively minor, enough that even if I had 20-year medians I wouldn’t worry about it compared to the benefits.
Beat me to it & said it better than I could.
My now-obsolete draft comment was going to say:
It seems to me that between about 2004 and 2014, Yudkowsky was the best person in the world to listen to on the subject of AGI and AI risks. That is, deferring to Yudkowsky would have been a better choice than deferring to literally anyone else in the world. Moreover, after about 2014 Yudkowsky would probably have been in the top 10; if you are going to choose 10 people to split your deference between (which I do not recommend, I recommend thinking for oneself), Yudkowsky should be one of those people and had you dropped Yudkowsky from the list in 2014 you would have missed out on some important stuff. Would you agree with this?
On the positive side, I’d be interested to see a top ten list from you of people you think should be deferred to as much or more than Yudkowsky on matters of AGI and AI risks.*
*What do I mean by this? Idk, here’s a partial operationalization: Timelines, takeoff speeds, technical AI alignment, and p(doom).
[ETA: lest people write me off as a Yudkowsky fanboy, I wish to emphasize that I too think people are overindexing on Yudkowsky’s views, I too think there are a bunch of people who defer to him too much, I too think he is often overconfident, wrong about various things, etc.]
[ETA: OK, I guess I think Bostrom probably was actually slightly better than Yudkowsky even on 20-year timespan.]
[ETA: I wish to reemphasize, but more strongly, that Yudkowsky seems pretty overconfident not just now but historically. Anyone deferring to him should keep this in mind; maybe directly update towards his credences but don’t adopt his credences. E.g. think “we’re probably doomed” but not “99% chance of doom.” Also, Yudkowsky doesn’t seem to be listening to others and understanding their positions well. So his criticisms of other views should be listened to but not deferred to, IMO.]
Hi Ajeya! I’m a huge fan of your timelines report; it’s by far the best thing out there on the topic as far as I know. Whenever people ask me to explain my timelines, I say “It’s like Ajeya’s, except...”
My question is: how important do you think it is for someone like me to do timelines research, compared to other kinds of research (e.g. takeoff speeds, alignment, acausal trade...)?
I sometimes think that even if I managed to convince everyone to shift from median 2050 to median 2032 (an obviously unlikely scenario!), it still wouldn’t matter much, because people’s decisions about what to work on are mostly driven by considerations of tractability, neglectedness, personal fit, importance, etc., and even that timelines difference would be a relatively minor consideration. On the other hand, intuitively it does feel like the difference between 2050 and 2032 is a big deal, and that people who believe one when the other is true will probably make big strategic mistakes.

Bonus question: Murphyjitsu: Conditional on TAI being built in 2025, what happened? (i.e. how was it built, what parts of your model were wrong, what do the next 5 years look like, what do the 5 years after 2025 look like?)
I haven’t considered all of the inputs to Cotra’s model, most notably the 2020 training computation requirements distribution. Without forming a view on that, I can’t really say that ~53% represents my overall view.
Sorry to bang on about this again and again, but it’s important to repeat for the benefit of those who don’t know: The training computation requirements distribution is by far the biggest cruxy input to the whole thing; it’s the input that matters most to the bottom line and is most subjective. If you hold fixed everything else Ajeya inputs, but change this distribution to something I think is reasonable, you get something like 2030 as the median (!!!) Meanwhile if you change the distribution to be even more extreme than Ajeya picked, you can push timelines arbitrarily far into the future.
Investigating this variable seems to have been beyond scope for the XPT forecasters, so this whole exercise is IMO merely that—a nice exercise, to practice for the real deal, which is when you think about the compute requirements distribution.
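To illustrate why this input dominates, here’s a deliberately crude toy calculation. This is not Ajeya’s actual model; the 2020 compute level and the growth rate below are assumptions I’m making up for illustration. The point is just that if effective training compute grows by a roughly fixed number of OOMs per year, the median timeline tracks when the growth curve crosses the median of the requirements distribution, so moving that median by a few OOMs moves the arrival year by many years.

```python
# Toy illustration (NOT Ajeya's actual model): how the median timeline moves
# when the median of the training-compute-requirements distribution moves.
# COMPUTE_2020 and OOMS_PER_YEAR are made-up-but-plausible assumptions.
import math

ANCHOR_YEAR = 2020
COMPUTE_2020 = 3e23    # assumed effective FLOP of the largest 2020 training run
OOMS_PER_YEAR = 0.5    # assumed growth in effective training compute

def median_tai_year(median_requirement_flop: float) -> float:
    """Year when available effective compute reaches the median requirement."""
    ooms_needed = math.log10(median_requirement_flop / COMPUTE_2020)
    return ANCHOR_YEAR + ooms_needed / OOMS_PER_YEAR

for exp in (30, 33, 36):
    print(f"median requirement 1e{exp} FLOP -> median year ~{median_tai_year(10.0 ** exp):.0f}")
```

With these toy numbers, moving the median requirement from 1e36 down to 1e30 FLOP pulls the median year from roughly 2045 to roughly 2033, i.e. the bottom line swings by more than a decade on the strength of this one input.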
I’m so excited to see this go live! I’ve learned a lot from it & consider it to do for takeoff speeds what Ajeya’s report did for timelines, i.e. it’s an actual fucking serious-ass gears-level model, the best that exists in the world for now. Future work will critique it and build off it rather than start from scratch, I say. Thanks Tom and Epoch and everyone else who contributed!
I strongly encourage everyone reading this to spend 10min playing around with the model, trying out different settings, etc. For example: Try to get it to match what you intuitively felt like timelines and takeoff would look like, and see how hard it is to get it to do so. Or: Go through the top 5-10 variables one by one and change them to what you think they should be (leaving unchanged the ones about which you have no opinion) and then see what effect each change has.
Almost two years ago I wrote this story of what the next five years would look like on my median timeline. At the time I had the bio anchors framework in mind, with a median training requirement of 3e29 FLOP. So, you can use this takeoff model as a nice complement to that story:
Go to takeoffspeeds.com and load the preset: best guess scenario.
Set AGI training requirements to 3e29 instead of 1e36
(Optional) Set software returns to 2.5 instead of 1.25 (I endorse this change in general, because it’s more consistent with the empirical evidence. See Tom’s report for details & decide whether his justification for cutting it in half, to 1.25, is convincing.)
(Optional) Set FLOP gap to 1e2 instead of 1e4 (In general, as Tom discusses in the report, if training requirements are smaller then probably the FLOP gap is smaller too. So if we are starting with Tom’s best guess scenario and lowering the training requirements we should also lower the FLOP gap.)
The result:
In 2024, 4% of AI R&D tasks are automated; then 32% in 2026; and then the singularity happens around when I expected, in mid-2028. This is close enough to what I had expected when I wrote the story that I’m tentatively making it canon.
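For readers who want to reproduce this without clicking around, the three changes above amount to the following overrides on the best-guess preset. (The key names here are my own informal shorthand, not necessarily the parameter names used by takeoffspeeds.com or Epoch’s code.)

```python
# My overrides to the takeoffspeeds.com "best guess" preset.
# Key names are informal shorthand, not the site's actual parameter names.
best_guess_overrides = {
    "agi_training_requirements_flop": 3e29,  # down from the preset's 1e36
    "software_returns": 2.5,                 # optional; preset uses 1.25
    "flop_gap": 1e2,                         # optional; preset uses 1e4
}
```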
Oh, also, a citation about my contribution to this post (Tom was going to make this a footnote but ran into technical difficulties): the extremely janky graph/diagram was made by me in May 2021, to help explain Ajeya’s Bio Anchors model. The graph that forms the bottom left corner came from some ARK Invest webpage which I can’t find now.
It would be cool to give this survey to top AI forecasters and see what the corresponding graphs look like. E.g. get superforecasters-who-have-forecast-on-AI-questions, or maybe top Metaculus users, etc.
Good point, I’ll add analogy to the list. Much that is called reference class forecasting is really just analogy, and often not even a good analogy.
I really think we should taboo “outside view.” If people are forced to use the term “reference class” to describe what they are doing, it’ll be more obvious when they are doing epistemically shitty things, because the term “reference class” invites the obvious next questions: 1. What reference class? 2. Why is that the best reference class to use?
Thanks for this. For my part, I have a daughter who is almost 1 year old now. I endorse / also experienced pretty much everything you describe here, e.g. I didn’t change much as a person either.
The sleeping in shifts thing sounds good. I wish we had done something like that. Instead, I just did all the night feedings, and also took care of the baby for most of the day most days until we had childcare. It sucked. I was constantly sleep-deprived for six months or so, and I still don’t get as much sleep as I used to.
Taking leave is super important. Neither I nor my wife took leave; I just worked less hard on my dissertation and other responsibilities. (Well, my wife took one week off from her classes, but she had to make it up later.) My productivity crashed, and I became unhappy trying to do too many things at once without sleep.
We stopped breastfeeding after three months because my wife had to study for exams. I thought that it wouldn’t be too hard to get the baby back to breast afterwards. I was wrong; we never got the baby back to breast and had to pump thereafter.
As a philosopher, I don’t think I’d agree that there is no crazy town. Plenty of lines of argument really do lead to absurd conclusions, and if you actually followed through you’d be literally crazy. For example, you might decide that you are probably a Boltzmann brain and that the best thing you can do is think happy thoughts as hard as you can because you are about to be dissolved into nothingness. Or you might decide that an action is morally correct iff it maximizes expected utility, but because of funnel-shaped action profiles every action has undefined expected utility, and so every action is morally correct.
What I’d say instead of “there is no crazy town” is that the train line is not a single line but a tree or web, and when people find themselves at a crazy town they just backtrack and try a different route. Different people have different standards for what counts as a crazy town; some people think lots of things are crazy and so they stay close to home. Other people have managed to find long paths that seem crazy to some but seem fine to them.
Since urban and rural areas rely critically on each other for resources, it is unlikely that an urban-rural war could be logistically feasible.
People keep saying this as an argument for why we won’t have a civil war, but it seems pretty weak to me:
1. Logistical problems mean a war would end quickly, not that it would never happen at all. And a civil war that ends quickly would IMO be almost as bad as one that takes longer to end.
2. The previous US civil war was not an urban/rural divide. But plenty of modern civil wars are; it’s pretty standard, in fact, for a central government controlling the major cities to wage war for several years against insurgents controlling much of the countryside.
As for the cultural revolution: As far as I can tell it wasn’t actually very top-down organized. It was sparked and to some extent directed by revered leaders like Mao, but on numerous occasions even the leaders couldn’t control the actions of the students. There were loads of cases of different sects of Red Guards fighting street battles with each other—not the sort of behavior you’d expect from a top-down movement!
What I’d like to learn about is the culture in china before the massacres began. Were people suspected of being rightists, counter-revolutionaries, landlords, etc. being deplatformed, harassed, fired, etc. prior to the massacres? Was there an uptick in this sort of thing in the years prior to the massacres?
For personal fit stuff: I agree that for intellectual work, personal fit is very important. It’s just that I have discovered, almost by accident, that I have more personal fit than I realized for things I wasn’t trained in. (You may have made a similar discovery?) Had I prioritized personal fit less early on, I would have explored more. I still wonder what sorts of things I could be doing by now if I had tried to reskill instead of continuing in philosophy. Yeah, maybe I would have discovered that I didn’t like it and gone back to philosophy, but maybe I would have discovered that I loved it. I guess this isn’t against prioritizing personal fit per se, but against how past-me interpreted the advice to prioritize personal fit.
For engaging with people outside EA: I went to a philosophy PhD program and climbed the conventional academic hierarchy for a few years. I learned a bunch of useful stuff, but I also learned a bunch of useless stuff, and a bunch of stuff which is useful but plausibly not as useful as what I would have learned working for an EA org. When I look back on what I accomplished over the last five years, almost all of the best stuff seems to be things I did on the side, extracurricular from my academic work (e.g. doing internships at CEA etc.). I also made a bunch of friends outside EA, which I agree is nice in several ways (e.g. the ones you mention), but to my dismay I found it really hard to get people to lift a finger in the direction of helping the world, even if I could intellectually convince them that e.g. AI risk is worth taking seriously, or that the critiques and stereotypes of EA they heard were incorrect. As a counterpoint, I did have interactions with several dozen people probably, and maybe I caused more positive change than I could see, especially since the world’s not over yet and there is still time for the effects of my conversations to grow. Still though: I missed out on several years’ worth of EA work and learning by going to grad school; that’s a high opportunity cost.
As for learning things myself: I heard a lot of critiques of EA, learned a lot about other perspectives on the world, etc. but ultimately I don’t think I would be any worse off in this regard if I had just gone into an EA org for the past five years instead of grad school.
I think that insofar as people are deferring on matters of AGI risk etc., Yudkowsky is in the top 10 people in the world to defer to based on his track record, and arguably top 1. Nobody who has been talking about these topics for 20+ years has a similarly good track record. If you restrict attention to the last 10 years, then Bostrom does and Carl Shulman and maybe some other people too (Gwern?), and if you restrict attention to the last 5 years then arguably about a dozen people have a somewhat better track record than him.
(To my knowledge. I think I’m probably missing a handful of people who I don’t know as much about because their writings aren’t as prominent in the stuff I’ve read, sorry!)
He’s like Szilard. Szilard wasn’t right about everything (e.g. he predicted there would be a war and the Nazis would win) but he was right about a bunch of things including that there would be a bomb, that this put all of humanity in danger, etc. and importantly he was the first to do so by several years.
I think if I were to write a post cautioning people against deferring to Yudkowsky, I wouldn’t talk about his excellent track record but rather about his arrogance, inability to clearly explain his views and argue for them (at least on some important topics, he’s clear on others), seeming bias towards pessimism, ridiculously high (and therefore seemingly overconfident) credences in things like p(doom), etc. These are the reasons I would reach for (and do reach for) when arguing against deferring to Yudkowsky.
[ETA: I wish to reemphasize, but more strongly, that Yudkowsky seems pretty overconfident not just now but historically. Anyone deferring to him should keep this in mind; maybe directly update towards his credences but don’t adopt his credences. E.g. think “we’re probably doomed” but not “99% chance of doom.” Also, Yudkowsky doesn’t seem to be listening to others and understanding their positions well. So his criticisms of other views should be listened to but not deferred to, IMO.]
I think “Humanity is going to bumble along as it always has” is not a realistic alternative; the Long Reflection is motivated by the worry that that won’t happen by default. Instead, we’ll all die, or end up in one of the various dystopian scenarios people talk about, e.g. the hardscrapple frontier, the disneyland with no children, some of the darker Age of Em stuff… (I could elaborate if you like). If we want humanity to continue bumbling on, we need to do something to make that happen, and the Long Reflection is a proposal for how to do that.
Thanks for this critique! I agree this is an important subject that is relatively understudied compared to other aspects of the problem. As far as I can tell there just isn’t a science of takeover; there’s military science, there’s the science of how to win elections in a democracy, and there’s a bit of research and a few books on how to seize power in a dictatorship… but for such an important subject, it’s unfortunate that there isn’t a general study of how agents in multi-agent environments accumulate influence and achieve large-scale goals over long time periods.
I’m going to give my reactions below as I read:

These passages seem to imply that the rate of scientific progress is primarily limited by the number and intelligence level of those working on scientific research. It is not clear, however, that the evidence supports this.
I mean it’s clearly more than JUST the number and intelligence of the people involved, but surely those are major factors! Piece of evidence: across many industries, performance on important metrics (e.g. price) seems to predictably improve exponentially with investment/effort (this is called the experience curve effect). Another piece of evidence: AlphaFold 2.
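For concreteness, the experience curve effect is usually modeled as unit cost falling by a constant fraction with each doubling of cumulative output. A quick sketch, with illustrative numbers rather than data from any particular industry:

```python
# Experience (learning) curve sketch: unit_cost = c1 * cumulative_output ** (-a),
# where a 20% cost drop per doubling of cumulative output implies a = -log2(0.80).
# The first-unit cost and progress ratio below are illustrative only.
import math

def unit_cost(cumulative_output: float, first_unit_cost: float = 100.0,
              progress_ratio: float = 0.80) -> float:
    a = -math.log2(progress_ratio)  # exponent implied by the progress ratio
    return first_unit_cost * cumulative_output ** (-a)

for n in (1, 10, 100, 1000):
    print(f"after {n:>4} cumulative units: unit cost ~{unit_cost(n):.1f}")
```

An 80% progress ratio (costs fall ~20% per doubling) is the classic textbook illustration; the point is just that sustained effort buys predictable, compounding improvement.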
Later you mention the gradual accumulation of ideas and cite the common occurrence of repeated independent discoveries. I think this is quite plausible. But note that a society of AIs would be thinking and communicating much faster than a society of humans, so the process of ideas gradually accumulating in their society would also be sped up.

First, though the actual model training was rapid, the entire process of developing Alpha Zero was far more protracted. Focusing on the day of training presents a highly misleading picture of the actual rate of progress of this particular example.
Sure, and similarly if AI R&D ability is like AI Go ability, there’ll be a series of better and better AIs over the course of many years that gradually get better at various aspects of R&D, until one day an AI is trained that is better than the most brilliant genius scientists. I actually expect things to be slower and more smoothed out than this, probably, because training will take more like a year. This is all part of the standard picture of AI takeover, not an objection to it.
Second, Go is a fully-observable, discrete-time, zero-sum, two-player board game.
I agree that the real world is more complex etc. and that just doing the same sort of self-play won’t work. There may be more sophisticated forms of self-play that work though. Also you don’t need self-play to be superhuman at something; e.g. you could use decision transformers + imitation learning.
These all take time to develop and put into place, which is why the development of novel technologies takes a long time. For example, the Lockheed Martin F-35 took about fifteen years from initial design to scale production. The Gerald R. Ford aircraft carrier took about ten years to build and fit out. Semiconductor fabrication plants cost billions of dollars, and the entire process from the design of a chip to manufacturing takes years. Given such examples, it seems reasonable to expect that even a nascent AGI would require years to design and build a functioning nanofactory. Doing so in secret or without outside interference would be even more difficult given all the specialised equipment, raw materials, and human talent that would be needed. A bunch of humans hired online cannot simply construct a nanofactory from nothing in a few months, regardless of how advanced is the AGI overseeing the process.
I’d be interested to hear your thoughts on this post which details a combination of “near-future” military technologies. Perhaps you’ll agree that the technologies on this list could be built in a few months or years by a developed nation with the help of superintelligent AI? Then the crux would be whether this tech would allow that nation to take over the world. I personally think that military takeover scenarios are unlikely because there are much easier and safer methods, but I still think military takeover is at least on the table—crazier things have happened in history.
That said, I don’t concede the point. You are right that it would take modern humans many years to build nanofactories etc., but I don’t think this is strong evidence that a superintelligence would also take many years. Consider video games and speedrunning: even if speedrunners don’t allow themselves to use bugs/exploits, they still usually go significantly faster than reasonably good players. Consider also human engineers building something that is already well understood vs. building something for the first time ever. The point is, if you are really smart and know what you are doing, you can do stuff much faster. You said that a lot of experimentation and experience is necessary—well, maybe it’s not. In general there’s a tradeoff between smarts and experimentation/experience; if you have more of one you need less of the other to reach the same level of performance. Maybe if you crank up smarts to superintelligence level—so intelligent that the best human geniuses seem a rounding error away from the average—you can get away with orders of magnitude less experimentation/experience. Not for everything perhaps, but for some things. Suppose there are N crazy sci-fi technologies that an AI could use to get a huge advantage: nanofactories, fusion, quantum shenanigans, bioengineering… All it takes is for one of them to be such that you can mostly substitute superintelligence for experimentation. And you can still do experimentation, and do it much faster than humans do, because you know what you are doing. Instead of toying around until hypotheses gradually coalesce in your brain, you can begin with a million carefully crafted hypotheses consistent with all the evidence you’ve seen so far and an experiment regime designed to optimally search through the space of hypotheses as fast as possible.
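To make that last point concrete with a toy example (purely illustrative, not a claim about any particular technology): if each experiment is chosen so that it rules out roughly half of the remaining live hypotheses, the number of experiments needed grows only logarithmically with the size of the hypothesis space, whereas testing hypotheses one at a time burns through vastly more trials.

```python
# Toy illustration of substituting smarts for experiments: an experiment that
# splits the live hypothesis space in half needs only ~log2(N) trials to isolate
# the truth, vs. ~N/2 trials on average when testing hypotheses one at a time.
import math

def experiments_needed(num_hypotheses: int) -> tuple[int, int]:
    optimal = math.ceil(math.log2(num_hypotheses))  # halve the space each time
    one_at_a_time = num_hypotheses // 2             # expected trials, naive testing
    return optimal, one_at_a_time

for n in (1_000, 1_000_000):
    opt, naive = experiments_needed(n)
    print(f"{n:>9} hypotheses: ~{opt} well-chosen experiments vs ~{naive} naive ones")
```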
I expect it to take somewhere between a day and five years to go from what you might call human-level AI to nanobot swarms. Perhaps this isn’t that different from what you think? (Maybe you’d say something like 3 to 10 years?)
Relying on a ‘front man’ to serve as the face of the AGI would be highly dangerous, as the AGI would become dependent on this person for ensuring the loyalty of its followers. Of course one might argue that a combination of bribery and threats could be sufficient, but this is not the primary means by which successful leaders in history have obtained obedience and popularity, so an AGI limited to these tools would be at a significant disadvantage. Furthermore, an AGI reliant on control over money is susceptible to intervention by government authorities to freeze assets and hamper the transfer of funds. This would not be an issue if the AGI had control over its own territory, but then it would be subject to blockade and economic sanctions. For instance, it would take an AGI considerable effort to acquire the power of Vladimir Putin, and yet he is still facing considerable practical difficulties in exerting his will on his own (and neighbouring) populations without the intervention of the rest of the world. While none of these problems are necessarily insuperable, I believe they are significant issues that must be considered in an assessment of the plausibility of various AI takeover scenarios.
History has many examples of people ruling from behind the throne, so to speak. Often they have no official title whatsoever, but the people with the official titles are all loyal to them. Sometimes the people with the official titles do rebel and stop listening to the power behind the throne, and then said power behind the throne loses power. Other times, this doesn’t happen.
AGI need not rule from behind the scenes though. If it’s charismatic enough it can rule over a group of Blake Lemoines. Have you seen the movie Her? Did you find the behavior of the humans super implausible in that movie—no way they would form personal relationships with an AI, no way they would trust it?

It is also unclear how an AGI would gain the skills needed to manipulate and manage large numbers of humans in the first place. It is by no means evident why an AGI would be constructed with this capability, or how it would even be trained for this task, which does not seem very amenable to traditional reinforcement learning approaches. In many discussions, an AGI is simply defined as having such abilities, but it is not explained why such skills would be expected to accompany general problem-solving or planning skills. Even if a generally competent AGI had instrumental reasons to develop such skills, would it have the capability of doing so? Humans learn social skills through years of interaction with other humans, and even then, many otherwise intelligent and wealthy humans possess such skills only to a minimal degree. Unless a credible explanation can be given as to how such an AI would acquire such skills, or why they should necessarily follow from broader capabilities, I do not think it is reasonable to simply define an AGI as possessing them and then assume this as part of a broader takeover narrative. This presents a major issue for takeover scenarios which rely on an AGI engaging large numbers of humans in its employment for the development of weapons or novel technologies.
It currently looks like most future AIs, and in particular AGIs, will have been trained on reading the whole internet & chatting to millions of humans over the course of several months. So, that’s how they’ll gain those skills.
(But also, if you are really good at generalizing to new tasks/situations, maybe manipulation of humans is one of the things you can generalize to. And if you aren’t really good at generalizing to new tasks/situations, maybe you don’t count as AGI.)
So far all I’ve done is critique your arguments, but hopefully one day I’ll have assembled some writing laying out my own arguments on this subject.
Anyhow, thanks again for writing this! I strongly disagree with your conclusions but I’m glad to see this topic getting serious & thoughtful attention.
My friend Cullen once said something like “It’s good for the world to have at least one group of people committed to doing good as such.” At first I was like “Why?” but now I think I understand.
In war, it’s generally a good idea to hold back some of your force as reserves. That way as the battle progresses and you get more information about which parts are doing well and poorly, you can send in the reserves to wherever they are needed most.
In the War On Bad Things, EAs are the reserves. They are much more capable of pivoting to different cause areas, projects, etc. as needed, and they are explicitly trying to go where they are most needed (as opposed to most other groups, which are doing the equivalent of trying to take hill X or hold line Z or whatever).