I work at Open Philanthropy, doing research for the biosecurity and pandemic preparedness team. Before that I was a research scholar at FHI, and before that did a PhD in physics.
We have decided to extend the deadline to June 5th. If you’d still be willing to advertise this in your forecasting newsletter, that would be helpful!
Thanks for pointing this out, but unfortunately we cannot shift the submission deadline.
On your first question, I agree: the utilitarian needs a measure (they don’t need a separate utility function from their measure, but there may be other natural measures to consider, in which case you do need a utility function).
With respect to your second question, I think you can either give up on the infinite cases (because you think they are “metaphysically” impossible, perhaps) or you can demand that a regularization must exist (because without one the problem is “metaphysically” underspecified). I’m not sure what the correct approach is here, and I think it is an interesting question to try to understand in more detail. In the latter case you have to give up impartiality, but only in a fairly benign way, and I think our intuitions about impartiality are probably wrong here (analogous situations occur in physics with charge conservation, as I noted in another comment).
With respect to your third question, I think it is likely that problems with no regularization are nonsensical. This is not to say that all problems involving infinities are themselves nonsense, nor that the correct choice of regularization is obvious.
As an intuition pump, maybe we can consider cases that don’t involve infinities. Say we are in a (rather contrived) world in which utility is literally a function of space-time, and we integrate it to get the total utility. How should I assign utility to a function which has support on a non-measurable set? Should I even think such a thing is possible? After all, the existence of non-measurable sets does not follow from ZF alone, but requires the axiom of choice as well. As another example, maybe my utility function depends on whether the continuum hypothesis is true or false. How should I act in this case?
My own guess is that such questions likely have no meaningful answer, and I think the same is true for questions involving infinities without specified ways to operationalize the infinities. I think it would be odd to give up on the utilitarian dream due to unmeasurable sets, and that the same is true for ill-defined infinities.
I think you are right about infinite sets (most of the mathematicians I’ve talked to have had distinctly negative views about set theory, in part due to the infinities, but my guess is that such views are more common amongst those working on physics-adjacent areas of research). I was thinking about infinities in analysis (such as continuous functions, summing infinite series, integration, differentiation, and so on), which bottom out in some sort of limiting process.
On the spatially unbounded universe example, this seems rather analogous to me to the question of how to integrate functions over such an unbounded space. There are a number of different sets of functions which are integrable over it, and even for some functions which are not integrable there are natural regularization schemes which allow the integral to be defined. In some cases these regularizations may even allow a notion of comparing different “infinities”: in cases where both integrals diverge as the regulator is removed, one may still strictly dominate the other. When dealing with situations in ethics, perhaps we should always be restricting to these cases? There are a lot of different choices here, and it isn’t clear to me what the correct restriction is, but it seems plausible to me that some form of restriction is needed. Note that such restrictions include ultrafinitism as an extreme case, but in general allow a much richer set of possibilities.
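As a toy illustration of that last point (my own example, not anything from the original discussion): compare two constant “utility densities”, 2 and 1, spread over an unbounded one-dimensional space, regularized with a damping factor $e^{-\epsilon|x|}$:

$$I_2(\epsilon) = \int_{-\infty}^{\infty} 2\, e^{-\epsilon |x|}\, dx = \frac{4}{\epsilon}, \qquad I_1(\epsilon) = \int_{-\infty}^{\infty} 1\, e^{-\epsilon |x|}\, dx = \frac{2}{\epsilon}.$$

Both integrals diverge as $\epsilon \to 0$, but $I_2(\epsilon) > I_1(\epsilon)$ for every $\epsilon > 0$, so the regularization still licenses the comparison even though neither total is finite.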
Expansionism is necessarily incomplete: it assumes that the world has a specific causal structure (i.e., one that is locally that of special relativity), which is an empirical observation about our universe rather than a logically necessary fact. I think it is plausible that, given the right causal assumptions, expansionism follows (at least for individual observers making decisions that respect causality).
As an aside, while neutrality-violations are a necessary consequence of regularization, a weaker form of neutrality is preserved. If we regularize with some discounting factor so that everything remains finite, it is easy to see that “small rearrangements” (where the amount that a person can be moved in time is finite) do not change the answer, because the difference goes to zero as the regulator is removed. But “big rearrangements” can cause differences that grow without bound as the regulator is removed. Such situations do arise in various physical settings, and are interpreted as changes to boundary conditions, whereas the “small rearrangements” manifestly preserve boundary conditions and manifestly cause no problems with the limit. (The boundary is most easily seen by mapping the infinite interval onto a compact one, so that “infinity” is mapped to a finite point. “Small rearrangements” leave infinity unchanged, whereas “big” ones cause a flow of utility across infinity, which is how the two situations are able to give different answers.)
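To make the “small rearrangement” claim concrete, here is a minimal sketch in notation of my own choosing: regularize with a discount $e^{-\epsilon t}$ and move one unit of utility from time $t$ to time $t + T$, with $T$ finite. The change in the regularized total is

$$\Delta U(\epsilon) = e^{-\epsilon (t+T)} - e^{-\epsilon t} = e^{-\epsilon t}\left(e^{-\epsilon T} - 1\right) \longrightarrow 0 \quad \text{as } \epsilon \to 0,$$

so any finite shift leaves the limit untouched, while a rearrangement that moves infinitely many people by unboundedly growing amounts can produce a $\Delta U(\epsilon)$ that fails to vanish (or even diverges) in the limit.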
I think what is true is probably something like “neverending processes don’t exist, but arbitrarily long ones do”, but I’m not confident. My more general claim is that there can be intermediate positions between ultrafinitism (“there is a biggest number”) and a laissez-faire “anything goes” attitude, where infinities appear without care or scrutiny. I would furthermore claim (on less solid ground) that the views of practicing mathematicians and physicists fall somewhere in between.
As to the infinite series examples you give, they are mathematically ill-defined without a regularization. There is a large literature in mathematics and physics on regularizing infinite series. Regularization and renormalization are used throughout physics (particularly in QFT), and while poorly written textbooks (particularly older ones) can make this appear like voodoo magic, the correct answers can always be rigorously obtained by making everything finite.
For the situation you are considering, a natural regularization would be to replace your sum with a regularized sum in which you discount each time step by some factor $\gamma < 1$. Physically speaking, this is what would happen if we thought the universe had some chance of being destroyed at each timestep; that is, it can be arbitrarily long-lived, yet with probability 1 it is finite. You can sum the series for $\gamma < 1$ and then take the limit $\gamma \to 1$, thus deriving a finite answer.
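For instance (my own example, not necessarily the series from the original post), for a world whose utility alternates $+1, -1, +1, \dots$ at successive time steps, the regularized total is

$$U(\gamma) = \sum_{t=0}^{\infty} (-1)^t \gamma^t = \frac{1}{1+\gamma} \;\longrightarrow\; \frac{1}{2} \quad \text{as } \gamma \to 1^-,$$

so the regularization assigns this world a definite value of $1/2$ even though the unregularized series has no limit.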
There may be many other ways to regulate the series, and it often turns out that how you regulate it doesn’t matter. In this way, it might make sense to talk about this infinite universe without reference to a specific limiting process, but rather with only some weaker specification of the limiting process. This is what happens, for instance, in QFT: the regularizations don’t matter, all we care about are the things that are independent of the regularization, and so we tend to think of the theories as existing without a need for regularization. However, when doing calculations it is often wise to use a specific (if arbitrary) regularization, because it guarantees that you will get the right answer. Without a regularization it is very easy to make mistakes.
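As a quick numerical sketch of this regulator-independence (again my own toy example, continuing with the alternating series above), one can check that two quite different regulators converge to the same value:

```python
import numpy as np

# Regularize the alternating series 1 - 1 + 1 - ... with two different
# regulators and check that both tend to the same limit (1/2) as eps -> 0.
def regularized_sum(regulator, eps, n_terms=100_000):
    t = np.arange(n_terms)
    return np.sum((-1.0) ** t * regulator(t, eps))

exp_reg = lambda t, eps: np.exp(-eps * t)         # exponential (Abel-style) regulator
gauss_reg = lambda t, eps: np.exp(-eps * t ** 2)  # Gaussian regulator

for eps in [1e-1, 1e-2, 1e-3]:
    print(eps, regularized_sum(exp_reg, eps), regularized_sum(gauss_reg, eps))
```

Both columns approach 0.5, the same answer the analytic $\gamma \to 1$ limit gives.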
This is all a very long-winded way to say that there are at least two intermediate views one could have about these infinite sequence examples, between the “ultrafinitist” and the “anything goes”:
- The real world (or your priors) demands some definitive regularization, which determines the right answer. This would be the case if the real world had some probability of being destroyed, even if it is arbitrarily small.
- Maybe infinite situations like the one you described are allowed, but require some “equivalence class of regularizations” in order to be completely specified. Otherwise the answer is as indeterminate as if you’d given me the situation without specifying the numbers. I think this view is a little weirder, but also the one that seems to be adopted in practice by physicists.
I think Section XIII is too dismissive of the view that infinities are not “real”, conflating it with ultrafinitism. But the sophisticated version of this view is that infinities should only be treated as “idealized limits” of finite processes. This is, as far as I understand, the default view amongst practicing mathematicians and physicists. If you stray from it, and use infinities without specifying the limiting process, it is very easy to produce paradoxes, or at least indeterminacy in the problem. The sophisticated view, then, is not that infinities don’t exist, but that, since they only exist as limiting cases of finite processes, one must always specify the limiting process; in doing so any paradoxes or indeterminacies will disappear.
As Jaynes summarizes in Chapter 15 of Probability Theory: The Logic of Science:
[P]aradoxes caused by careless dealing with infinite sets or limits can be mass-produced by the following simple procedure:
(1) Start from a mathematically well-defined situation, such as a finite set, a normalized probability distribution, or a convergent integral, where everything is well-behaved and there is no question about what is the correct solution.
(2) Pass to a limit – infinite magnitude, infinite set, zero measure, improper pdf, or some other kind – without specifying how the limit is approached.
(3) Ask a question whose answer depends on how the limit was approached.
In principle I agree, although in practice there are other mitigating factors which mean it doesn’t seem to be that relevant.
This is partly because the 10^52 number is not very robust. In particular, once you start postulating such large numbers of future people I think you have to take the simulation hypothesis much more seriously, so that the large size of the far future may in fact be illusory. But even on a more mundane level we should probably worry that achieving 10^52 happy lives might be much harder than it looks.
It is partly also because at a practical level the interventions long-termists consider don’t rely on the possibility of 10^52 future lives, but are good even over just the next few hundred years. I am not aware of many things that have smaller impacts and yet still remain robustly positive, such that we would only pursue them due to the 10^52 future lives. This is essentially for the reasons that asolomonr gives in their comment.
Attempts to reject fanaticism necessarily lead to major theoretical problems, as described for instance here and here.
However, questions about fanaticism are not that relevant for most questions about x-risk. The x-risks of greatest concern to most long-termists (AI risk, bioweapons, nuclear weapons, climate change) all have reasonable odds of occurring within the next century or so, and even if we cared only about humans living in the next century or so we would find that these are valuable to prevent. This is mostly a consequence of the huge number of people alive today.
The Great Big Book of Horrible Things is a list of the 100 worst man-made events in history, many of which fit your definition of moral catastrophe.
Practices (rather than events) that might fit your definition include:
- slavery, in its many forms
- judicial torture
- Apartheid, and racial segregation in other countries such as the USA and Australia
Thanks for the reply Rory! I think at this point it is fairly clear where we agree (quantitative methods and ideas from maths and physics can be helpful in other disciplines) and where we disagree (whether complexity science has new insights to offer, and whether there is a need for an interdisciplinary field doing this work separate from the ones that already exist), and I don’t have any more to offer here past my previous comments. I appreciate your candidness in noting that most complexity scientists don’t mention complexity or emergence much in their published research; as is probably clear, I think this suggests that, despite their rhetoric, they haven’t managed to make these concepts useful.
I do not think the SFI, at least judging from their website, and from the book Scale which I read a few years ago, is a good model of public relations that EAs should try to emulate. They make grand claims about what they have achieved which seem to me to be out of proportion to their actual accomplishments. I’m curious to hear what you think the great success stories of SFI are. The one I know the most about, the scaling laws, I’m pretty skeptical of for the reasons outlined previously. I had a look at their “Evolution of Human Languages” program, and it seems to be fringe research by the standards of mainstream comparative linguistics. But there could well be success stories that I am unfamiliar with, particularly in economics.
If the OP wants to discuss agent-based modeling, then I think they should discuss agent-based modeling. I don’t think there is anything to be gained by calling agent-based models “complex systems”, or that taking a complexity science viewpoint adds any value.
Likewise, if you want to study networks, why not study networks? Again, adding the word “complex” doesn’t buy you anything.
As I said in my original comment, part of complexity science is good: this is the idea that we can use maths and physics to model other systems. But this is hardly a new insight. Economists, biophysicists, mathematical biologists, computer scientists, statisticians, and applied mathematicians have been doing this for centuries. While sometimes siloing can be a problem, for the most part ideas flow fairly freely between these disciplines and there is a lot of cross-pollination. When ideas don’t flow it is usually because they aren’t useful in the new field. (Maybe they rely on inappropriate assumptions, or are useful in the wrong regime, or answer the wrong questions, or are trivial and/or intractable in situations the new field cares about, or don’t give empirically testable results, or are already used by the new field in a slightly different way.) The “problem” of “siloing” that complexity science claims to want to solve is largely a mirage.
But of course, complexity science makes greater claims than just this. It claims to be developing some general insights into the workings of complex systems. As I’ve noted in my previous comment, these claims are at best just false and at worst completely vacuous. I think it is dangerous to support the kind of sophistry spouted by complexity scientists, for the same reason it is dangerous to support sophistry anywhere. At best it draws attention away from scientists who are making progress on real problems, and at worst it leads to piles of misleading and overblown hype.
My criticism is not analogous to the claim that “ML is just a rebranding of statistics”. After all, ML largely studies different topics and different questions to statistics. No, it would be as if we lived in a world without computers, and ML consisted of people waxing lyrical about how “computation” would solve learning, but then, when asked how, would just say basic (and sometimes incorrect) things about statistics.
As someone with a background in theoretical physics, I am very skeptical of the claims made by complexity science. At a meta-level I dislike being overly negative, and I don’t want to discourage people posting things that they think might be interesting or relevant on the forum. But I have seen complexity science discussed now by quite a few EAs rather credulously, and I think it is important to set the record straight.
On to the issues with complexity science. Broadly speaking, the problem with “complexity science” is that it is trying to study “complex systems”. But the only meaningful definition of “complex system” is a system that is not currently amenable to mathematical analysis. (Note this is not always the definition that “complexity scientists” seem to actually use, since they like to talk about things like the Ising model, which is not only well understood and long studied by physicists, but was actually exactly solved in 1944!) Trying to study the set of all “complex systems” is a bit like trying to study the set of animals that aren’t jellyfish, snails, lemurs or sting rays.
The concepts developed by “complexity scientists” are usually either well-known and understood concepts from physics and mathematics (such as “phase transition”, “non-linear”, “non-equilibrium”, “non-ergodicity”, “criticality”, “self-similarity”) or else so hopelessly vague as to be useless (“complexity”, “emergence”, “non-reducibility”, “self-organization”). If you want to learn about the former topics I would just recommend reading actual textbooks written by and aimed at physicists and mathematicians. For instance, I particularly like Nonlinear Dynamics and Chaos by Strogatz if you want to understand dynamical systems, and David Tong’s lecture notes on Statistical Physics and Statistical Field Theory if you want to understand phase transitions and critical phenomena.
Note that none of these concepts are new. Even the idea of applying these concepts to the social sciences is hardly novel, see this review for example. Note the lack of hype, and lack of buzz words.
Unfortunately, the research that I’ve seen under the moniker of “complexity science” uses these (precise, limited in scope) concepts both liberally and in a facile way. As a single example, let’s have a look at “scaling laws”. Scaling laws are symptoms of critical behavior, and, as already mentioned, such critical phenomena have long been studied by physicists. If you look at empirical datasets (such as those of city sizes, or how various biological features scale with the size of an animal), sometimes you also find power-laws, and so naturally we might try to claim that these are also “critical systems”. But this plausible idea doesn’t seem to work in reality, for both theoretical and empirical reasons.
The theoretical problem is that pretty much all critical systems in physics require fine-tuning. For instance, you might have to dial the temperature and pressure of your gas to really specific values in order to see the behavior. There have been attempts to find models where we don’t need to fine-tune, known as “self-organized criticality”, but these have basically all failed. Models which are often claimed to possess “self-organized criticality”, such as the forest-fire model, do not actually have this behavior. On the empirical side, most purported “power-laws” are, in practice, not obviously power-laws. A long discussion of this can be found here, but essentially the difficulty is that it is hard in practice to distinguish power-laws from other plausible distributions, such as log-normals.
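To make the empirical difficulty concrete, here is a small hedged sketch (my own construction, not from the linked discussion; it only illustrates the flavour of the problem, and a proper analysis would use truncated likelihoods and a formal model-comparison test): fit both a power law and a log-normal to the tail of data that was in fact generated from a log-normal, and compare how well they do.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
data = rng.lognormal(mean=1.0, sigma=2.0, size=50_000)  # truth: log-normal

# Look only at the upper tail, as power-law fits usually do.
x_min = np.quantile(data, 0.9)
tail = data[data >= x_min]

# Maximum-likelihood power-law (Pareto) exponent for the tail.
alpha = 1.0 + len(tail) / np.sum(np.log(tail / x_min))

# Log-normal fit to the same tail (location fixed at zero for simplicity).
shape, loc, scale = stats.lognorm.fit(tail, floc=0)

# Rough comparison: average log-likelihood per tail point under each model.
ll_powerlaw = np.mean(np.log(alpha - 1) - np.log(x_min) - alpha * np.log(tail / x_min))
ll_lognormal = np.mean(stats.lognorm.logpdf(tail, shape, loc=loc, scale=scale))
print(f"alpha ~ {alpha:.2f}; mean logL power-law: {ll_powerlaw:.3f} log-normal: {ll_lognormal:.3f}")
```

Whatever the numbers come out to on a given run, the deeper point from the linked discussion stands: distinguishing these two hypotheses requires careful statistics, not just a straight-looking line on a log-log plot.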
If we want to talk about the hopelessly vague topics, well, there is really nothing much to be said about them, either by complexity scientists or by anyone else. To pick on “emergence” for the moment, I think this post from The Sequences sums up nicely the emptiness of this word. There is a notion of “emergence” that does appear in physics, known as “effective field theory”, which is very central to our current understanding of both particle and condensed matter physics. You can find this discussed in any quantum field theory textbook (I particularly like Peskin & Schroeder). For some reason I’ve never seen complexity scientists discuss it, which is strange, since this is the precise mathematical language physicists use to describe the emergence of large-scale behavior in physical systems.
TLDR: There is no secret sauce to studying complicated systems, and “complexity science” has not made any progress on this front. To paraphrase a famous quote, “The part that is good is not original, and the part that is original is not correct (and is also misapplied).”
I don’t think so. The “immeasurability” of the future that Vaden has highlighted has nothing to do with the literal finiteness of the timeline of the universe. It has to do, rather, with the set of all possible futures (which is provably infinite). This set is immeasurable in the mathematical sense of lacking sufficient structure to be operated upon with a well-defined probability measure. Let me turn the question around on you: Suppose we knew that the time-horizon of the universe was finite, can you write out the sample space, $\sigma$-algebra, and measure which allows us to compute over possible futures?
It is certainly not obvious that the universe is infinite in the sense you suggest. Certainly nothing is “provably infinite” with our current knowledge. Furthermore, although we may not be certain about the properties of our own universe, we can easily imagine worlds rich enough to contain moral agents which nonetheless remain completely finite. For instance, you could imagine a cellular automaton with a finite grid size which only lasted for a finite duration.
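To make that example concrete, here is a minimal sketch (my own toy construction) of how such a finite world answers the measurability challenge: with a finite grid and a finite duration, the set of possible futures can literally be enumerated, the $\sigma$-algebra can be taken to be the power set, and any assignment of non-negative weights summing to one is a valid measure.

```python
from itertools import product

# A "world" is a 1D binary cellular automaton on N_CELLS cells that runs for
# N_STEPS steps. A "possible future" is any sequence of grid states.
N_CELLS, N_STEPS = 3, 2

states = list(product((0, 1), repeat=N_CELLS))        # 2**N_CELLS grid states
futures = list(product(states, repeat=N_STEPS + 1))   # every possible history

# Sample space: `futures`; sigma-algebra: its power set (implicitly);
# measure: here, just the uniform one.
uniform = {f: 1.0 / len(futures) for f in futures}

print(len(futures), "possible futures; total probability =", sum(uniform.values()))
```

This is of course a toy, but it shows that the “immeasurability” worry is about worlds (or priors) that are genuinely infinite, not about finiteness per se.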
However, perhaps the more important consideration is the set of possible futures that we must in principle consider when doing EV calculations, rather than the universe we actually inhabit, since even if our universe is finite we could never convince ourselves of this with certainty. Is it this set of possible futures that you think suffers from “immeasurability”?
I agree with your criticism of my second argument. What I should have instead said is a bit different. There are actions whose value decreases over time. For instance, all else being equal it is better to implement a policy which reduces existential risk sooner rather than later. Patient philanthropy makes sense only if either (a) you expect the growth of your resources to outpace the value lost by failing to act now, or (b) you expect cheaper opportunities to arise in the future. I don’t think there are great reasons to believe either of these is true (or indeed false, I’m not very certain on the issue).
There are two issues with knowledge, and I probably should have separated them more clearly. The more important one is that the kind of decision-relevant information Will is asking for, that is, knowing when and how to spend your money optimally, may well just be unattainable. Optimal strategies with imperfect information probably look very different from optimal strategies with perfect information.
A secondary issue is that you actually need to generate the knowledge. I agree it is unclear whether Will is considering the knowledge problem as part of “direct” or “patient” philanthropy. But since knowledge production might eat up a large chunk of your resources, and since some types of knowledge may be best produced by trying to do direct work, plausibly the “patient philanthropist” ends up spending a lot of resources over time. This is not the image of patient philanthropy I originally had, but maybe I’ve been misunderstanding what Will was envisaging.
I can’t speak for why other people down-voted the comment but I down-voted it because the arguments you make are overly simplistic.
The model you have of philanthropy is that an agent in each time period has the choice to either (1) invest or (2) spend their resources, and then gets a payoff depending on how “influential” the time is. You argue that the agent should save until they reach the most “influential” time, before spending all of their resources at this most influential time.
I think this model is misleading for a couple of reasons. First, in the real world we don’t know when the most influential time is. In this case the agent may find it optimal to spend some of their resources at each time step. For instance, direct philanthropic donations may give them a better understanding in the future of how influentialness varies (i.e., if you don’t invest in AI safety researchers now, how will you ever know whether or when AI safety will be a problem?). You may also worry about “going bust”: if, while you are being patient, an existential catastrophe (or value lock-in) happens, then the patient long-termist loses their entire investment.
Perhaps one way to convey how important this knowledge problem is to finding the optimal strategy is to think of it as analogous to owning stocks in a bubble. Your strategy is to sell at the market peak, but you can’t do that if you don’t know when the peak will be.
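To push on the analogy a little (this is my own toy illustration, not something from the original post): even the optimal stopping rule for this kind of problem, when the peak can only be recognized in hindsight, picks out the single best period only about 37% of the time.

```python
import numpy as np

# Secretary-problem-style sketch: period "influentialness" values arrive one
# by one; the classic rule observes the first n/e periods, then commits to the
# first period that beats everything seen so far.
rng = np.random.default_rng(0)
n_periods, n_sims = 100, 20_000
cutoff = int(n_periods / np.e)

hits = 0
for _ in range(n_sims):
    influence = rng.random(n_periods)
    benchmark = influence[:cutoff].max()
    later = np.nonzero(influence[cutoff:] > benchmark)[0]
    chosen = cutoff + later[0] if len(later) else n_periods - 1
    hits += chosen == influence.argmax()

print(f"picked the most influential period in {hits / n_sims:.1%} of runs")
```

So even before worrying about going bust, “spend everything at the most influential time” is not a strategy an agent without foreknowledge can actually implement.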
Second, there are very plausible reasons why now may be the best time to donate. If we can spend money today to permanently reduce existential risk, or to permanently improve the welfare of the global poor, then it is always more valuable to do that action ASAP rather than wait. Likewise we plausibly get more value by working on biorisk, AI safety, or climate change today then we will in 20 years.
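One toy way to make the “ASAP” point precise (my own sketch, with assumptions chosen purely for simplicity): suppose each period carries a constant extinction hazard $r$, and a one-off intervention permanently lowers it to $r - \delta$ from the moment it is implemented. Taking the expected number of future periods as the value at stake, delaying the intervention from period $0$ to period $t$ changes the expected gain from

$$\frac{1}{r-\delta} - \frac{1}{r} \qquad \text{to} \qquad (1-r)^{t}\left(\frac{1}{r-\delta} - \frac{1}{r}\right),$$

which is strictly decreasing in $t$: the benefit only arrives if the world survives unprotected until you act.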
Third, the assumption of no diminishing marginal returns is illogical. We should be thinking about how EAs as a community should spend their money. As an individual, I would not want to hold out for the most influential time if I thought everyone else was doing the same, and of course as a community we can coordinate.
I should also point out that, if I’ve understood your position correctly Carl, I agree with you. Given my second argument (that a priori we have something like 1 in a trillion odds of being the most influential), I don’t think we should end up concluding much from this.
Most importantly, this is because whether or not I am the most influential person is not actually the relevant question for decision making.
But even aside from this, I have a lot more information about the world than just prior odds. For instance, any long-termist has information about their wealth and education which makes them fairly exceptional compared to the average human that has ever lived. They also have reasonable evidence about existential risk this century and plausible (for some loose definition of plausible) ways to influence it. At the end of the day each of us still has low odds of being the most influential person ever, but perhaps with odds more in the 1 in 10 million range, rather than 1 in a trillion.
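To spell out the size of the update this implies (using only the two figures above): going from 1-in-a-trillion to 1-in-10-million is, to a good approximation for such small probabilities, a Bayes factor of

$$\frac{10^{-7}}{10^{-12}} = 10^{5},$$

i.e. the combined evidence would have to be about $10^5$ times more likely if one were the most influential person than if one were not.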
In his first comment Will says he prefers to frame it as “influential people” rather than “influential times”. In particular, if you read his article (rather than the blog post), at the end of Section 5 he says he thinks it is plausible that the most influential people may live within the next few thousand years, so I don’t think his odds that this century is the most influential can be very low (at a guess, one in a thousand?). I might be wrong though; I’d be very curious to know what Will’s prior is that the most influential person will be alive this century.
I’m confused as to what your core outside-view argument is Will. My initial understanding of it was the following:
(A1) We are in a potentially large future with many trillions of trillions of humans
(A2) Our prior should be that we are randomly chosen amongst all humans who will ever live
then we conclude that
(C) We should have extremely low a priori odds of being amongst the most influential
To be very crudely quantitative about this, multiplying the number of humans on earth by the number of stars in the visible universe and the lifetime of the Earth, we quickly end up with estimates of ~1e38 total humans, and so priors on the order of ~1e-38. As Buck points out, this argument doesn’t work unless you are also willing to accept, with similarly extreme likelihood, that the apparent fact that we are very early humans is wrong. Otherwise the sheer weight of 1e38 pushes you extremely strongly to the conclusion that either (A1) is false or that we are almost certainly in a simulation.
Perhaps a somewhat different argument is closer to what you actually think. Here I’ve tried to frame the argument in a way that I think both you and Buck would find reasonable:
(A1′) A priori it is plausible that the most influential human is early. For simplicity, let’s say we have a 10% prior that the most influential human lives while the majority of humanity is still all on earth.
(A2′) The number of humans that will be alive up to the end of this period is plausibly on the scale of 100 billion people.
(A3′) Our evidence that humanity is restricted to one planet is incontrovertible (i.e., no simulation)
We now conclude that
(C’) We should have low, but not absurdly astronomically low, odds of being amongst the most influential humans.
Note that compared to the previous argument, the a priori odds of being the most influential person are now 1e-10, so our earliness essentially increases our belief that we are the most influential by something like 1e28. But of course a 1-in-a-100-billion prior is still pretty low, and you don’t think our evidence is sufficiently strong to overcome it.
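Spelling out the arithmetic already implicit here: the earliness observation moves the prior odds from roughly $10^{-38}$ to roughly $10^{-10}$, i.e.

$$\frac{10^{-10}}{10^{-38}} = 10^{28},$$

which is the factor of ~1e28 quoted above; the remaining question is whether our evidence supplies anything like the further update needed on top of that.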
Do you agree with this argument Will? Or have I misunderstood you?
I think this is too bearish on the economic modeling. If you want to argue that climate change could pose some risk of civilization collapse, you have to argue that some pathway exists from climate to a direct impact on society that prevents society from functioning. When discussing collapse scenarios from climate, most people (I think) are envisaging food, water, or energy production becoming so difficult that this causes further societal failures. But the economic models strongly suggest that the perturbations on these fronts are only “small”, so that we shouldn’t expect these to lead to a collapse. I think in this regime we should trust the economic modeling. If instead the economic models were finding really large effects (say, a 50% reduction in food production), then I would agree that the economic models were no longer reliable. At that point society would be functioning in a very different regime from the present, so we wouldn’t expect the economic modeling to be very useful.
You could argue that the economic models are missing some other effect that could cause collapse, but I think it is difficult to tell such a story. The story that climate change will increase the number of wars is fairly speculative, and then you would have to argue that war could cause collapse, which is implausible excepting nuclear war. I think there is something to this story, but would be surprised if climate change were the predominant factor in whether we have a nuclear war in the next century.
Famine-induced mass migration also seems very unlikely to cause civilizational collapse. It would be very easy with modern technology for a wealthy country to defend itself against arbitrarily large groups of desperate, starving refugees. Indeed, to my knowledge there has been no analogue of a famine, then mass migration, then collapse of a neighbouring society chain of events in the historical record, despite many horrific famines. I haven’t investigated this question in detail, however, and would be very interested if such events have in fact occurred.