I’m a senior research scholar at FHI, with a background in theoretical physics.
djbinder
Thanks for pointing this out, but unfortunately we cannot shift the submission deadline.
Request for proposals: Help Open Philanthropy quantify biological risk
I agree with your first question: the utilitarian needs a measure (they don’t need a separate utility function from their measure, but there may be other natural measures to consider, in which case you do need a utility function).
With respect to your second question, I think you can either give up on the infinite cases (because you think they are “metaphysically” impossible, perhaps) or demand that a regularization must exist (because otherwise the problem is “metaphysically” underspecified). I’m not sure what the correct approach is here, and I think it is an interesting question to try to understand in more detail. In the latter case you have to give up impartiality, but only in a fairly benign way, and I suspect our intuitions about impartiality are simply wrong here (analogous situations occur in physics with charge conservation, as I noted in another comment).
With respect to your third question, I think it is likely that problems with no regularization are nonsensical. This is not to say that all problems involving infinities are nonsense, nor that the correct choice of regularization is always obvious.
As an intuition pump, maybe we can consider cases that don’t involve infinities. Say we are in a (rather contrived) world in which utility is literally a function of spacetime, and we integrate it to get the total utility. How should I assign utility to a function which has support on a nonmeasurable set? Should I even think such a thing is possible? After all, the existence of nonmeasurable sets follows not from ZF alone, but requires the axiom of choice. As another example, maybe my utility function depends on whether the continuum hypothesis is true or false. How should I act in this case?
My own guess is that such questions likely have no meaningful answer, and I think the same is true of questions involving infinities without specified ways to operationalize them. It would be odd to give up on the utilitarian dream due to nonmeasurable sets, and the same is true for ill-defined infinities.
I think you are right about infinite sets (most of the mathematicians I’ve talked to have had distinctly negative views about set theory, in part due to the infinities, but my guess is that such views are more common amongst those working in physics-adjacent areas of research). I was thinking about infinities in analysis (continuous functions, summing infinite series, integration, differentiation, and so on), which bottom out in some sort of limiting process.
On the spatially unbounded universe example, this seems rather analogous to the question of how to integrate functions over an unbounded space. There are a number of different sets of functions which are integrable over such a space, and even for some functions which are not integrable there are natural regularization schemes which allow the integral to be defined. In some cases these regularizations may even allow a notion of comparing different “infinities”: where the integral diverges as the regularizer is taken to zero, one integral may strictly dominate another. When dealing with situations in ethics, perhaps we should always restrict to these cases? There are a lot of different choices here, and it isn’t clear to me what the correct restriction is, but it seems plausible that some form of restriction is needed. Note that such restrictions include ultrafinitism as an extreme case, but in general allow a much richer set of possibilities.
Expansionism is necessarily incomplete: it assumes that the world has a specific causal structure (i.e., one that is locally that of special relativity), which is an empirical observation about our universe rather than a logically necessary fact. I think it is plausible that, given the right causal assumptions, expansionism follows (at least for individual observers making decisions that respect causality).
As an aside, while neutrality violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor so that everything remains finite, it is easy to see that “small rearrangements” (where the amount that a person can be moved in time is finite) do not change the answer, because the difference goes to zero as the discount is removed. But “big rearrangements” can cause differences that survive the limit. Such situations do arise in various physical settings, and are interpreted as changes to boundary conditions, whereas the “small rearrangements” manifestly preserve boundary conditions and manifestly do not cause problems with the limit. (The boundary is most easily seen by mapping the infinite time interval onto a compact interval, so that “infinity” is mapped to a finite point. “Small rearrangements” leave infinity unchanged, whereas “big” ones cause a flow of utility across infinity, which is how the two situations are able to give different answers.)
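A minimal numerical sketch of this point (the discount factor gamma and the particular utility streams are my own illustrative choices, not anything from the discussion above): moving one unit of utility a finite distance in time changes the discount-regularized total by an amount that vanishes as the discount is removed, while shifting the whole stream does not.

```python
def discounted_total(u, gamma):
    """Discount-regularized total utility of a truncated stream u."""
    total, g = 0.0, 1.0
    for ut in u:
        total += g * ut
        g *= gamma
    return total

T = 200_000          # truncation horizon; gamma**T is negligible below
base = [1.0] * T     # one unit of utility at every time step

# "Small" rearrangement: move one unit of utility from time 5 to time 7.
small = list(base)
small[5] -= 1.0
small[7] += 1.0

# "Big" rearrangement: shift the entire stream one step later in time.
big = [0.0] + base[:-1]

results = {}
for gamma in (0.99, 0.999, 0.9999):
    d_small = discounted_total(base, gamma) - discounted_total(small, gamma)
    d_big = discounted_total(base, gamma) - discounted_total(big, gamma)
    results[gamma] = (d_small, d_big)
    print(gamma, d_small, d_big)
```

As gamma approaches 1 the small-rearrangement difference (gamma^5 − gamma^7) shrinks toward zero, while the big-rearrangement difference stays pinned at 1: the boundary-condition behaviour described above.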
I think what is true is probably something like “neverending processes don’t exist, but arbitrarily long ones do”, but I’m not confident. My more general claim is that there can be intermediate positions between ultrafinitism (“there is a biggest number”) and a laissez-faire “anything goes” attitude, where infinities appear without care or scrutiny. I would furthermore claim (on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in between.
As to the infinite series examples you give, they are mathematically ill-defined without a regularization. There is a large literature in mathematics and physics on regularizing infinite series. Regularization and renormalization are used throughout physics (particularly in QFT), and while poorly written textbooks (particularly older ones) can make this look like voodoo magic, the correct answers can always be rigorously obtained by making everything finite.
For the situation you are considering, a natural regularization would be to replace your sum with a regularized sum in which you discount each time step by some discounting factor. Physically speaking, this is what would happen if we thought the universe had some chance of being destroyed at each timestep; that is, it can be arbitrarily long-lived, yet with probability 1 it is finite. You can sum the series, then take the limit as the discount is removed, and thus derive a finite answer.
There may be many other ways to regulate the series, and it often turns out that how you regulate the series doesn’t matter. In this way, it might make sense to talk about this infinite universe without reference to a specific limiting process, but only with some weaker specification of a class of limiting processes. This is what happens, for instance, in QFT: the regularizations don’t matter, all we care about are the things that are independent of regularization, and so we tend to think of the theories as existing without a need for regularization. However, when doing calculations it is often wise to use a specific (if arbitrary) regularization, because it guarantees that you will get the right answer. Without regularization it is very easy to make mistakes.
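As a toy illustration of this regulator-independence (my example, not one drawn from the discussion above): Grandi’s divergent series 1 − 1 + 1 − 1 + … can be regularized by discounting each term (Abel summation) or by averaging partial sums (Cesàro summation), and both schemes agree on the value 1/2.

```python
# Two different regularizations of the divergent series 1 - 1 + 1 - 1 + ...

def abel_sum(gamma, n_terms=100_000):
    """Discount the t-th term by gamma**t and sum; this gives
    1/(1 + gamma), which tends to 1/2 as gamma -> 1."""
    total, g = 0.0, 1.0
    for t in range(n_terms):
        total += g if t % 2 == 0 else -g
        g *= gamma
    return total

def cesaro_sum(n_terms=100_000):
    """Average of the first n partial sums (which alternate 1, 0, 1, 0, ...)."""
    partial, running = 0, 0
    for t in range(n_terms):
        partial += 1 if t % 2 == 0 else -1
        running += partial
    return running / n_terms

print(abel_sum(0.999))   # ~0.50025, tends to 1/2 as gamma -> 1
print(cesaro_sum())      # 0.5
```

Two entirely different limiting procedures land on the same answer, which is what licenses talking about “the” regularized value without naming a particular regulator.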
This is all a very long-winded way of saying that there are at least two intermediate views one could have about these infinite sequence examples, between the “ultrafinitist” and the “anything goes”:
1. The real world (or your priors) demands some definitive regularization, which determines the right answer. This would be the case if the real world had some probability of being destroyed, even if it is arbitrarily small.

2. Infinite situations like the one you described are allowed, but require some “equivalence class of regularizations” to be specified in order to be completely specified. Otherwise the answer is as indeterminate as if you’d given me the situation without specifying the numbers. I think this view is a little weirder, but also the one that seems to be adopted in practice by physicists.
I think Section XIII is too dismissive of the view that infinities are not “real”, conflating it with ultrafinitism. The sophisticated version of this view is that infinities should only be treated as idealized limits of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists. If you stray from it, and use infinities without specifying the limiting process, it is very easy to produce paradoxes, or at least indeterminacy in the problem. The sophisticated view, then, is not that infinities don’t exist, but that they exist only as limiting cases of finite processes: one must always specify the limiting process, and in doing so any paradoxes or indeterminacies disappear.
As Jaynes summarizes in Chapter 15 of Probability Theory: The Logic of Science:
[P]aradoxes caused by careless dealing with infinite sets or limits can be mass-produced by the following simple procedure:
(1) Start from a mathematically well-defined situation, such as a finite set, a normalized probability distribution, or a convergent integral, where everything is well-behaved and there is no question about what is the correct solution.
(2) Pass to a limit – infinite magnitude, infinite set, zero measure, improper pdf, or some other kind – without specifying how the limit is approached.
(3) Ask a question whose answer depends on how the limit was approached.
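A concrete instance of this recipe (my example, not Jaynes’s): the alternating harmonic series converges to ln 2, but if you pass to the infinite sum without fixing the order of the terms, the answer depends on how the limit is approached; summing the very same terms in a different order converges to a different value.

```python
import math

def alternating_harmonic(n):
    """Partial sum of 1 - 1/2 + 1/3 - 1/4 + ... (converges to log 2)."""
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n_blocks):
    """The same terms in a different order: two positive (odd-denominator)
    terms, then one negative (even-denominator) term per block. This
    rearrangement converges to (3/2) log 2 instead."""
    total = 0.0
    pos, neg = 1, 2   # next odd and even denominators to use
    for _ in range(n_blocks):
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos += 4
        neg += 2
    return total

print(alternating_harmonic(100_000))   # ~0.69314 (log 2)
print(rearranged(100_000))             # ~1.03972 (1.5 * log 2)
```

Step (2) of the recipe is the unordered infinite sum; step (3) is asking for “the” value. Specify the limiting process (the order of summation) and the indeterminacy disappears.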
In principle I agree, although in practice there are other mitigating factors which mean this doesn’t seem that relevant.
This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.
It is partly also because at a practical level the interventions longtermists consider don’t rely on the possibility of 10^52 future lives, but are good even over just the next few hundred years. I am not aware of many things that have smaller impacts and yet still remain robustly positive, such that we would only pursue them due to the 10^52 future lives. This is essentially for the reasons that asolomonr gives in their comment.
Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.
However, questions about fanaticism are not that relevant for most questions about x-risk. The x-risks of greatest concern to most longtermists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we cared only about humans living in the next century or so we would find these risks valuable to prevent. This is mostly a consequence of the huge number of people alive today.
A Simple Model of AGI Deployment Risk
The Great Big Book of Horrible Things is a list of the 100 worst man-made events in history, many of which fit your definition of a moral catastrophe.
Practices (rather than events) that might fit your definition include:
slavery, in its many forms
judicial torture
Apartheid, and racial segregation in other countries such as the USA and Australia
Thanks for the reply Rory! I think at this point it is fairly clear where we agree (quantitative methods and ideas from maths and physics can be helpful in other disciplines) and where we disagree (whether complexity science has new insights to offer, and whether there is a need for a separate interdisciplinary field doing this work), and I don’t have anything more to offer past my previous comments. I appreciate your candidness in noting that most complexity scientists don’t mention complexity or emergence much in their published research; as is probably clear, I think this suggests that, despite their rhetoric, they haven’t managed to make these concepts useful.
I do not think the SFI, at least judging from their website and from the book Scale which I read a few years ago, is a good model of public relations that EAs should try to emulate. They make grand claims about what they have achieved which seem to me to be out of proportion to their actual accomplishments. I’m curious to hear what you think the great success stories of SFI are. The one I know the most about, the scaling laws, I’m pretty skeptical of for the reasons outlined previously. I had a look at their “Evolution of Human Languages” program, and it seems to be fringe research by the standards of mainstream comparative linguistics. But there could well be success stories I am unfamiliar with, particularly in economics.
If the OP wants to discuss agent-based modeling, then I think they should discuss agent-based modeling. I don’t think there is anything to be gained by calling agent-based models “complex systems”, or that taking a complexity-science viewpoint adds any value.
Likewise, if you want to study networks, why not study networks? Again, adding the word “complex” doesn’t buy you anything.
As I said in my original comment, part of complexity science is good: the idea that we can use maths and physics to model other systems. But this is hardly a new insight. Economists, biophysicists, mathematical biologists, computer scientists, statisticians, and applied mathematicians have been doing this for centuries. While siloing can sometimes be a problem, for the most part ideas flow fairly freely between these disciplines and there is a lot of cross-pollination. When ideas don’t flow it is usually because they aren’t useful in the new field. (Maybe they rely on inappropriate assumptions, or are useful in the wrong regime, or answer the wrong questions, or are trivial and/or intractable in situations the new field cares about, or don’t give empirically testable results, or are already used by the new field in a slightly different way.) The “problem” of “siloing” that complexity science claims to want to solve is largely a mirage.
But of course, complexity science makes greater claims than just this. It claims to be developing some general insights into the workings of complex systems. As I’ve noted in my previous comment, these claims are at best just false and at worst completely vacuous. I think it is dangerous to support the kind of sophistry spouted by complexity scientists, for the same reason it is dangerous to support sophistry anywhere. At best it draws attention away from scientists who are making progress on real problems, and at worst it leads to piles of misleading and overblown hype.
My criticism is not analogous to the claim that “ML is just a rebranding of statistics”. After all, ML largely studies different topics and asks different questions than statistics does. No, it would be as if we lived in a world without computers, and ML consisted of people waxing lyrical about how “computation” would solve learning, but when asked how, would just say basic (and sometimes incorrect) things about statistics.
As someone with a background in theoretical physics, I am very skeptical of the claims made by complexity science. At a meta-level I dislike being overly negative, and I don’t want to discourage people from posting things that they think might be interesting or relevant on the forum. But I have seen complexity science discussed rather credulously by quite a few EAs now, and I think it is important to set the record straight.
On to the issues with complexity science. Broadly speaking, the problem with “complexity science” is that it is trying to study “complex systems”. But the only meaningful definition of “complex system” is a system that is not currently amenable to mathematical analysis. (Note this is not always the definition that “complexity scientists” seem to actually use, since they like to talk about things like the Ising model, which is not only well understood and long studied by physicists, but was actually exactly solved in 1944!) Trying to study the set of all “complex systems” is a bit like trying to study the set of animals that aren’t jellyfish, snails, lemurs, or sting rays.
The concepts developed by “complexity scientists” are usually either well-known and understood concepts from physics and mathematics (such as “phase transition”, “nonlinear”, “nonequilibrium”, “nonergodicity”, “criticality”, “self-similarity”) or else so hopelessly vague as to be useless (“complexity”, “emergence”, “nonreducibility”, “self-organization”). If you want to learn about the former topics I would just recommend reading actual textbooks written by and aimed at physicists and mathematicians. For instance, I particularly like Nonlinear Dynamics and Chaos by Strogatz if you want to understand dynamical systems, and David Tong’s lecture notes on Statistical Physics and Statistical Field Theory if you want to understand phase transitions and critical phenomena.
Note that none of these concepts are new. Even the idea of applying these concepts to the social sciences is hardly novel, see this review for example. Note the lack of hype, and lack of buzz words.
Unfortunately, the research that I’ve seen under the moniker of “complexity science” uses these (precise, limited in scope) concepts both liberally and in a facile way. As a single example, let’s have a look at “scaling laws”. Scaling laws are symptoms of critical behavior, and, as already mentioned, such critical phenomena have long been studied by physicists. If you look at empirical datasets (such as city sizes, or how various biological features scale with the size of an animal) you sometimes also find power laws, and so naturally we might try to claim that these are also “critical systems”. But this plausible idea doesn’t seem to work in reality, for both theoretical and empirical reasons.
The theoretical problem is that pretty much all critical systems in physics require fine-tuning. For instance, you might have to dial the temperature and pressure of your gas to really specific values in order to see the behavior. There have been attempts to find models where we don’t need to fine-tune, known as “self-organized criticality”, but these have basically all failed. Models which are often claimed to possess self-organized criticality, such as the forest-fire model, do not actually have this behavior. On the empirical side, most purported “power laws” are, in practice, not obviously power laws. A long discussion of this can be found here, but essentially the difficulty is that it is hard in practice to distinguish power laws from other plausible distributions, such as lognormals.
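A small self-contained illustration of that difficulty (the parameters here are mine, chosen for illustration): on a log-log plot a broad lognormal density is a shallow parabola, which over a couple of decades is nearly indistinguishable from the straight line a power law would give.

```python
import math

# A log-normal density evaluated on a log-log grid looks locally like a
# power law: log f(x) is quadratic in log x, and over a limited range a
# shallow quadratic is well approximated by a straight line.

mu, sigma = 0.0, 3.0   # illustrative parameters (a broad log-normal)

def log_density(x):
    """Log of the log-normal pdf with parameters mu, sigma."""
    return (-math.log(x * sigma * math.sqrt(2 * math.pi))
            - (math.log(x) - mu) ** 2 / (2 * sigma ** 2))

# Sample log f(x) over two decades, x in [1, 100].
xs = [10 ** (k / 50) for k in range(0, 101)]
logx = [math.log(x) for x in xs]
logf = [log_density(x) for x in xs]

# Least-squares straight-line fit of log f against log x.
n = len(xs)
mx = sum(logx) / n
my = sum(logf) / n
sxx = sum((u - mx) ** 2 for u in logx)
sxy = sum((u - mx) * (v - my) for u, v in zip(logx, logf))
slope = sxy / sxx
resid = sum((v - (my + slope * (u - mx))) ** 2 for u, v in zip(logx, logf))
syy = sum((v - my) ** 2 for v in logf)
r_squared = 1 - resid / syy
print(slope, r_squared)   # near-linear fit: r_squared very close to 1
```

The fit looks like a clean power law (R² above 0.99) even though the underlying distribution is lognormal, which is why casual log-log plots are such weak evidence for criticality.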
If we want to talk about the hopelessly vague topics, well, there is really not much to be said about them, either by complexity scientists or by anyone else. To pick on “emergence” for the moment, I think this post from The Sequences sums up nicely the emptiness of this word. There is a notion of “emergence” that does appear in physics, known as “effective field theory”, which is central to our current understanding of both particle and condensed matter physics. You can find it discussed in any quantum field theory textbook (I particularly like Peskin & Schroeder). For some reason I’ve never seen complexity scientists discuss it, which is strange, since this is the precise mathematical language physicists use to describe the emergence of large-scale behavior in physical systems.
TL;DR: There is no secret sauce to studying complicated systems, and “complexity science” has not made any progress on this front. To paraphrase a famous quote, “the part that is good is not original, and the part that is original is not good” (and the good part is also misapplied).
I don’t think so. The “immeasurability” of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: suppose we knew that the time-horizon of the universe was finite; can you write out the sample space, $\sigma$-algebra, and measure which allow us to compute over possible futures?
It is certainly not obvious that the universe is infinite in the sense you suggest. Certainly nothing is “provably infinite” given our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents which nonetheless remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasted for a finite duration.
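To make that concrete, here is a toy version (the rule and sizes are arbitrary choices of mine): a finite, deterministic cellular automaton whose entire set of possible futures can be enumerated, so a probability measure over futures is trivially well-defined.

```python
from itertools import product

# A fully finite toy universe: a 1-D cellular automaton with N cells, two
# states, periodic boundaries, evolved for T steps under an arbitrary rule
# (each cell becomes the XOR of its two neighbours). Because everything is
# finite, the sample space of possible futures can be written out explicitly,
# the sigma-algebra is just the power set, and any assignment of nonnegative
# numbers summing to 1 is a valid probability measure.

N, T = 5, 3

def step(state):
    return tuple(state[(i - 1) % N] ^ state[(i + 1) % N] for i in range(N))

def future(initial):
    """The entire history of this universe from a given initial state."""
    history = [initial]
    for _ in range(T):
        history.append(step(history[-1]))
    return tuple(history)

# One future per initial condition (the dynamics are deterministic);
# here we put the uniform measure on initial conditions.
sample_space = [future(s) for s in product((0, 1), repeat=N)]
measure = {f: 1 / len(sample_space) for f in sample_space}

print(len(sample_space))        # 32 possible futures
print(sum(measure.values()))    # total measure: 1.0

# A well-defined event: "the universe ends in the all-zero state".
p_event = sum(p for f, p in measure.items() if f[-1] == (0,) * N)
print(p_event)
```

Nothing about this construction is deep; the point is only that a finite world straightforwardly admits the sample space, σ-algebra, and measure being asked for.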
However, perhaps the more important consideration is the in-principle set of possible futures that we must consider when doing EV calculations, rather than the universe we actually inhabit, since even if our universe is finite we could never convince ourselves of this with certainty. Is it this set of possible futures that you think suffers from “immeasurability”?
I agree with your criticism of my second argument; what I should have said is a bit different. There are actions whose value decreases over time. For instance, all else being equal, it is better to implement a policy which reduces existential risk sooner rather than later. Patient philanthropy makes sense only if either (a) you expect the growth of your resources to outpace the value lost by failing to act now, or (b) you expect cheaper opportunities to arise in the future. I don’t think there are great reasons to believe either of these is true (or indeed false; I’m not very certain on the issue).
There are two issues with knowledge, and I probably should have separated them more clearly. The more important one is that the kind of decision-relevant information Will is asking for (that is, knowing when and how to spend your money optimally) may well just be unattainable. Optimal strategies with imperfect information probably look very different from optimal strategies with perfect information.
A secondary issue is that you actually need to generate the knowledge. I agree it is unclear whether Will is considering the knowledge problem as part of “direct” or “patient” philanthropy. But since knowledge production might eat up a large chunk of your resources, and since some types of knowledge may be best produced by trying to do direct work, plausibly the “patient philanthropist” ends up spending a lot of resources over time. This is not the image of patient philanthropy I originally had, but maybe I’ve been misunderstanding what Will was envisaging.
I can’t speak for why other people downvoted the comment, but I downvoted it because the arguments you make are overly simplistic.
The model you have of philanthropy is that an agent in each time period has the choice to either (1) invest or (2) spend their resources, receiving a payoff depending on how “influential” the time is. You argue that the agent should save until they reach the most “influential” time, and then spend all of their resources at that time.
I think this model is misleading for a couple of reasons. First, in the real world we don’t know when the most influential time is. In this case the agent may find it optimal to spend some of their resources at each time step. For instance, direct philanthropic donations today may give them a better understanding of how influentialness varies in the future (i.e., if you don’t invest in AI safety researchers now, how will you ever know whether or when AI safety will be a problem?). You may also worry about “going bust”: if, while you are being patient, an existential catastrophe (or value lock-in) happens, then the patient longtermist loses their entire investment.
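The “going bust” worry can be made quantitative with a deliberately crude toy model (all numbers are illustrative, not estimates): invested resources grow at some rate r per year, but each year there is a probability p that a catastrophe or lock-in wipes out their value.

```python
# Toy model: spend one unit of resources now, or invest and spend later.
# Investments grow at rate r per year, but each year there is a probability p
# that a catastrophe (or value lock-in) makes the resources worthless.

def expected_value_of_waiting(t, r, p):
    """Expected resources, relative to spending now, after waiting t years."""
    return ((1 + r) * (1 - p)) ** t

r = 0.05                          # annual return on investment
for p in (0.001, 0.02, 0.06):     # annual catastrophe probability
    print(p, expected_value_of_waiting(50, r, p))
```

Waiting beats spending now only when (1 + r)(1 − p) > 1, i.e. p < r/(1 + r), which is about 4.8% per year for these numbers; whether patience pays thus hinges on parameters we cannot directly observe.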
Perhaps one way to convey how important this knowledge problem is to finding the optimal strategy is to think of it as analogous to owning stocks in a bubble. Your strategy says to sell at the market peak, but we can’t do that if we don’t know when the peak will be.
Second, there are very plausible reasons why now may be the best time to donate. If we can spend money today to permanently reduce existential risk, or to permanently improve the welfare of the global poor, then it is always more valuable to take that action as soon as possible rather than wait. Likewise, we plausibly get more value by working on biorisk, AI safety, or climate change today than we will in 20 years.
Third, the assumption of no diminishing marginal returns is implausible. We should be thinking about how EAs as a community should spend their money. As an individual, I would not want to hold out for the most influential time if I thought everyone else was doing the same, and of course as a community we can coordinate.
I should also point out that, if I’ve understood your position correctly, Carl, I agree with you: given my second argument, that a priori we have something like 1-in-a-trillion odds of being the most influential, I don’t think we should end up concluding much from this.
Most importantly, this is because whether or not I am the most influential person is not actually the decision-relevant question.
But even aside from this, I have a lot more information about the world than just prior odds. For instance, any longtermist has information about their wealth and education which makes them fairly exceptional compared to the average human who has ever lived. They also have reasonable evidence about existential risk this century and plausible (for some loose definition of plausible) ways to influence it. At the end of the day each of us still has low odds of being the most influential person ever, but perhaps with odds more in the 1-in-10-million range, rather than 1 in a trillion.
In his first comment Will says he prefers to frame it as “influential people” rather than “influential times”. In particular, if you read his article (rather than the blog post), at the end of section 5 he says he thinks it is plausible that the most influential people may live within the next few thousand years, so I don’t think his odds that this century is the most influential can be very low (at a guess, one in a thousand?). I might be wrong though; I’d be very curious to know what Will’s prior is that the most influential person will be alive this century.
We have decided to extend the deadline to June 5th. If you’d still be able to advertise this in your forecasting newsletter, that would be helpful!