Research analyst at Open Philanthropy. All opinions are my own.
Lukas Finnveden
The alien will use the same reasoning and conclude that humans are more valuable (in expectation) than aliens. That’s weird.
Different phrasing: Consider a point in time when someone hasn’t yet received introspective evidence about what human or alien welfare is like, but is soon about to. (Perhaps they are a human who has recently lost all their memories, and so doesn’t remember what pain or pleasure or anything else of value is like.) They face a two-envelope problem about whether to benefit an alien, who they think is either twice as valuable as a human, equally valuable as a human, or half as valuable as a human. At this point they have no evidence about what either human or alien experience is like, so they ought to be indifferent between switching or not. So they could be convinced to switch to benefitting humans for a penny. Then they will go have experiences, and regardless of what they experience, if they then choose to “pin” the EV calculation to their own experience, the EV of switching to benefitting non-humans will be positive. So they’ll pay 2 pennies to switch back again. So they 100% predictably lost a penny. This is irrational.
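To make the money pump concrete, here's a minimal sketch of the two "pinned" EV calculations, assuming (just for illustration; the comment above doesn't specify probabilities) a uniform 1/3 credence over the three possible ratios:

```python
from fractions import Fraction as F

# Possible alien-to-human value ratios, each given credence 1/3 (an assumption).
ratios = [F(2), F(1), F(1, 2)]

# Pin the EV calculation to your own (human) experience: a human benefit is
# worth exactly 1, so the alien benefit is worth the ratio in expectation.
ev_alien_in_human_units = sum(ratios) / len(ratios)
print(ev_alien_in_human_units)  # 7/6 > 1: switching to benefit the alien looks good

# Pin the EV calculation to the alien's experience instead: the alien benefit
# is worth exactly 1, so the human benefit is worth 1/ratio in expectation.
ev_human_in_alien_units = sum(1 / r for r in ratios) / len(ratios)
print(ev_human_in_alien_units)  # also 7/6 > 1: switching back looks good too
```

Whichever experience you pin to after the fact, the other option looks better in expectation, which is what enables the predictable penny loss.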
Many posts this week reference RP’s work on moral weights, which came to the surprising-to-most “Equality Result”: chicken experiences are roughly as valuable as human experiences.
I thought that post used the “equality result” as a hypothetical and didn’t claim it was correct.
When first introduced:
Suppose that these assumptions lead to the conclusion that chickens and humans can realize roughly the same amount of welfare at any given time. Call this “the Equality Result.” The key question: Would the Equality Result alone be a good reason to think that one or both of these assumptions is mistaken?
At the end of the post:
Finally, let’s be clear: we are not claiming that the Equality Result is correct. Instead, our claim is that given the assumptions behind the Moral Weight Project (and perhaps even without them), we shouldn’t flinch at “animal-friendly” results.
I think the right post to refer readers to is probably this one, where chicken experiences are estimated at 1/3 of humans’. (Which isn’t too far off from 1x, so I don’t think this undermines your post.)
Nice, I feel compelled by this.
The main question that remains for me (only parenthetically alluded to in my above comment) is:
Do we get something that deserves to be called an “anthropic shadow” for any particular, more narrow choice of “reference class”, and...
can the original proposers of an “anthropic shadow” be read as proposing that we should work with such reference classes?
I think the answer to the first question is probably “yes” if we look at a reference class that changes over time, something like R_t = “people alive at period t of development in young civilizations’ history”.
I don’t know about the answer to the second question. I think R_t seems like kind of a wild reference class to work with, but I never really understood how reference classes were supposed to be chosen for SSA, so idk what SSA’s proponents think is reasonable vs. not.
From some brief searches/skimming of the anthropic shadow paper… I don’t think they discuss the topic in enough depth that they can be said to have argued for such a reference class, and it seems like a pretty wild reference class to just assume. (They never mention the term “reference class” or even any anthropic principles like SSA.)
Under typical decision theory, your decisions are a product of your beliefs and the utilities that you assign to different outcomes. In order to argue that Jack and Jill ought to be making different decisions here, it seems that you must either:
Dispute the paper’s claim that Jack and Jill ought to assign the same probabilities in the above type of situations.
Be arguing that Jack and Jill ought to be making their decisions differently despite having identical preferences about the next round and identical beliefs about the likelihood that a ball will turn out to be red.
Are you advancing one of these claims? If (1), I think you’re directly disagreeing with the paper for reasons that don’t just come down to how to approach decision making. If (2), maybe say more about why you propose Jack and Jill make different decisions despite having identical beliefs and preferences?
Anthropic shadow effects are one of the topics discussed loosely in social settings among EAs (and in general open-minded nerdy people), often in a way that assumes the validity of the concept
FWIW, I think it’s rarely a good idea to assume the validity of anything where anthropics plays an important role. Or decision theory (cf. this). These are very much not settled areas.
This sometimes even applies when it’s not obvious that anthropics is being invoked. I think Dissolving the Fermi Paradox and Grabby aliens both rely on pretty strong assumptions about anthropics that are easy for readers to miss. (Tristan Cook does a good job of making the anthropics explicit, and exploring a wide range, in this post.)
Oh, also, re the original paper, I do think that even given SSA, Teru’s argument that Jack and Jill have equivalent epistemic perspectives is correct. (Importantly: As long as Jack and Jill use the same SSA reference classes, and those reference classes don’t treat Jack and Jill any differently.)
Since the core mechanism in my above comment is the correlation between x2 and the total number of observers, I think Jill the Martian would also arrive at different Pr(A) depending on whether she was using SSA or SIA.
(But Teru doesn’t need to get into any of this, because he effectively rejects SSA towards the end of the section “Barking Dog vs The Martians” (p12-14 of the pdf). Referring to his previous paper Doomsday and objective chances.)
But this example relies on there just being one planet. If there are >1 planets, each with two periods, we are back to having an anthropic shadow again.
Let’s consider the case with 2 planets. Let’s call them x and y.
According to SSA:
Given A, there are 4 different possibilities, each with probability 1/4:
No catastrophe on either planet.
Catastrophe on x.
Catastrophe on y.
Catastrophe on both.
Let’s say you observe yourself to be alive at time-step 2 on planet x.
In each of these possibilities, SSA says your probability of finding yourself as the x2 observer is 1 divided by the total number of observers in that world (and 0 if x2 doesn’t exist). So:
Pr(x2|A) = 1/4*1/4 + 1/4*0 + 1/4*1/3 + 1/4*0 ~= 0.146
Given B, the probabilities are instead:
No catastrophe on either planet: (9/10)^2
Catastrophe on x: 9/10*1/10
Catastrophe on y: 1/10*9/10
Catastrophe on both: 1/10*1/10
Pr(x2|B) = (9/10)^2*1/4 + 9/10*1/10*0 + 1/10*9/10*1/3 + 1/10*1/10*0 ~= 0.233
Pr(A|x2) = Pr(x2|A)Pr(A)/Pr(x2) = Pr(x2|A)Pr(A)/[Pr(x2|A)*0.5 + Pr(x2|B)*0.5] ~= 0.146*0.5/[0.146*0.5+0.233*0.5] ~= 0.385.
According to SIA:
Here, we can directly compute Pr(A|x2).
All x2 observers are in worlds where:
A is true and no catastrophe happens on either planet. Probability: 0.5*1/4
A is true and there’s only a catastrophe on y. Probability: 0.5*1/4
B is true and no catastrophe happens on either planet. Probability: 0.5*(9/10)^2
B is true and there’s only a catastrophe on y. Probability: 0.5*9/10*1/10
The total sum of x2 measure in worlds where A is true is 0.5*1/4 + 0.5*1/4 = 0.25.
The total sum of x2 measure is 0.5*1/4 + 0.5*1/4 + 0.5*(9/10)^2 + 0.5*9/10*1/10 = 0.7
Pr(A|x2) = 0.25/0.7 ~= 0.357.
The difference would be somewhat larger with >2 planets. (But would never be very large. Unless you changed the SSA reference classes so that you’re e.g. only counting observers at period 2.)
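For anyone who wants to check the arithmetic or vary the setup, here's a minimal sketch that enumerates the catastrophe patterns and computes both posteriors for an arbitrary number of planets. (The `posteriors` function is my own illustrative construction; its optional `background` parameter anticipates the "background population" point below.)

```python
from itertools import product

def posteriors(n_planets, p_cat_A=0.5, p_cat_B=0.1, prior_A=0.5, background=0):
    """Pr(A | you find yourself at period 2 on planet x), under SSA and SIA.

    SSA reference class: all observers on all planets at both periods,
    plus `background` extra observers whose existence is independent of x2.
    """
    def stats(p_cat):
        ssa_term = 0.0  # E over worlds of Pr(a random reference-class member is x2)
        sia_term = 0.0  # Pr(the x2 observer exists at all)
        # Planet 0 is "x"; True means that planet suffers a catastrophe.
        for outcome in product([False, True], repeat=n_planets):
            prob = 1.0
            for hit in outcome:
                prob *= p_cat if hit else 1 - p_cat
            if outcome[0]:
                continue  # x2 doesn't exist if there's a catastrophe on x
            survivors = sum(1 for hit in outcome if not hit)
            n_observers = n_planets + survivors + background
            ssa_term += prob / n_observers
            sia_term += prob
        return ssa_term, sia_term

    ssa_A, sia_A = stats(p_cat_A)
    ssa_B, sia_B = stats(p_cat_B)
    ssa = prior_A * ssa_A / (prior_A * ssa_A + (1 - prior_A) * ssa_B)
    sia = prior_A * sia_A / (prior_A * sia_A + (1 - prior_A) * sia_B)
    return ssa, sia

for n in (2, 5, 10):
    ssa, sia = posteriors(n)
    print(f"{n} planets: SSA ~= {ssa:.3f}, SIA ~= {sia:.3f}")
# 2 planets: SSA ~= 0.385, SIA ~= 0.357 -- matching the numbers above.
# The gap grows a bit with more planets, but stays modest.
```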
Also: The mechanism of action here is the correlation between there being a survivor alive at x2 and there being a greater number of total observers in your reference class. There are multiple ways to break this:
If you have a universe with both A planets and B planets (i.e. each planet has a 50% probability of being an A planet and a 50% probability of being a B planet) then there will once again not be any difference between SIA and SSA. (Because then there’s no correlation between x2 and the total number of observers.)
Alternatively, if there’s a sufficiently large “background population” of people in your reference class whose size is equally large regardless of whether there’s a survivor at x2, then the correlation between x2 and the total number of observers can become arbitrarily small, and so the difference between SIA and SSA can become arbitrarily small.
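Continuing the sketch above: increasing the `background` parameter shows this directly, since the SSA posterior converges to the SIA one as the background population grows.

```python
for background in (0, 10, 1000):
    ssa, sia = posteriors(2, background=background)
    print(f"background {background}: SSA ~= {ssa:.3f}, SIA ~= {sia:.3f}")
# SSA goes 0.385 -> 0.364 -> 0.357 as the background grows, i.e. it approaches
# the SIA answer and the anthropic-shadow effect washes out.
```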
Overall: I don’t think SSA-style anthropic shadows of any significant size are real. Because I think SSA is unreasonable, and because I think SSA with small/restrictive reference classes is especially unreasonable. And with large reference classes, it seems unlikely to me that there are large correlations between our possible historic demise and the total number of observers. (For reasons like the above two bullet points.)
What’s important in “AI for epistemics”?
I think Elon buying Twitter was a mess, but community notes is one of the best improvements to the info ecosystem I have seen.
Note that, according to Wikipedia:
The program launched in 2021 and became widespread on X in 2023. Initially shown to U.S. users only, notes were popularized in March 2022 over misinformation in the Russian invasion of Ukraine followed by COVID-19 misinformation in October. Birdwatch was then rebranded to Community Notes and expanded in November 2022.
Elon bought Twitter in October 2022, after the program had already been online for a while. I don’t know whether any important details changed after Elon joined, nor whether Twitter already had plans to expand the program. So I don’t know how much credit Elon should get here vs. the previous owners of Twitter.
And here’s the full list of the 57 speakers we featured on our website
That’s not right: You listed these people as special guests — many of them didn’t do a talk. Importantly, Hanania didn’t. (According to the schedule.)
I just noticed this. And it makes me feel like “if someone rudely seeks out controversy, don’t list them as a special guest” is such a big improvement over the status quo.
Hanania was already not a speaker. (And Nathan Young suggests that last year, this was partly a conscious decision rather than just him not feeling like giving a talk.)
If you just had open ticket sales and allowed Hanania to buy a ticket (or not) just like everyone else, then I think that would be a lot better in the eyes of most people who don’t like that Hanania is listed as a special guest (including me). My guess would be that it’s a common conference policy to “have open ticket sales, and only refuse people if you think they might actively break-norms-and-harm-people during the event (not based on their views on Twitter)”. (Though I could be off-base here — I haven’t actually read many conferences’ policies.)
I think people who are concerned about preserving the “open expression of ideas” should basically not care who gets to be listed as a “special guest”. This has roughly no consequence on their ability to express their ideas. It’s just a symbolic gesture of “we think this person is cool, and we think that you should choose whether to go to our event partly based on whether you also think this person is cool”. It’s just so reasonable to exclude someone from a list like that even just on the basis of “this person is rude and unnecessarily seeks out controversy and angering people”. (Which I think basically everyone agrees is true for e.g. Hanania.)
Here’s one line of argument:
Positive argument in favor of humans: It seems pretty likely that whatever I’d value on-reflection will be represented in a human future, since I’m a human. (And accordingly, I’m similar to many other humans along many dimensions.)
If AI values were sampled ~randomly (whatever that means), I think that the above argument would be basically enough to carry the day in favor of humans.
But here’s a salient positive argument in favor of why AIs’ values will be similar to mine: People will be training AIs to be nice and helpful, which will surely push them towards better values.
However, I also expect people to be training AIs for obedience and, in particular, training them to not disempower humanity. So if we condition on a future where AIs disempower humanity, we evidently didn’t have that much control over their values. This significantly weakens the strength of the argument “they’ll be nice because we’ll train them to be nice”.
In addition: human disempowerment is more likely to succeed if AIs are willing to egregiously violate norms, such as by lying, stealing, and killing. So conditioning on human disempowerment also updates me somewhat towards egregiously norm-violating AI. That makes me feel less good about their values.
Another argument is that, in the near term, we’ll train AIs to act nicely on short-horizon tasks, but we won’t particularly train them to deliberate and reflect on their values well. So even if “AIs’ best-guess stated values” are similar to “my best-guess stated values”, there’s less reason to believe that “AIs’ on-reflection values” are similar to “my on-reflection values”. (Whereas the basic argument about my being similar to other humans still works OK: “my on-reflection values” vs. “other humans’ on-reflection values”.)
Edit: Oops, I accidentally switched to talking about “my on-reflection values” rather than “total utilitarian values”. The former is ultimately what I care more about, though, so it is what I’m more interested in. But sorry for the switch.
There might not be any real disagreement. I’m just saying that there’s no direct conflict between “present people having material wealth beyond what they could possibly spend on themselves” and “virtually all resources are used in the way that totalist axiologies would recommend”.
What’s the argument for why an AI future will create lots of value by total utilitarian lights?
At least for hedonistic total utilitarianism, I expect that a large majority of expected-hedonistic-value (from our current epistemic state) will be created by people who are at least partially sympathetic to hedonistic utilitarianism or other value systems that value a similar type of happiness in a scope-sensitive fashion. And I’d guess that humans are more likely to have such values than AI systems. (At least conditional on my thinking that such values are a good idea, on reflection.)
Objective-list theories of welfare seem even less likely to be endorsed by AIs. (Since they seem pretty specific to human values.)
There are certainly some values you could have that would mainly be concerned that we got any old world with a large civilization. Or that would think it morally appropriate to be happy that someone got to use the universe for what they wanted, and morally inappropriate to be too opinionated about who that should be. But I don’t think that looks like utilitarianism.
I find it plausible that future humans will choose to create far fewer minds than they could. But I don’t think that “selfishly desiring high material welfare” will require this. The Milky Way alone has enough stars for each currently alive human to get an entire solar system. Simultaneously, intergalactic colonization is probably possible (see here), and I think the stars in our own galaxy are less than 1-in-a-billion of all reachable stars. (Most of which are also very far away, which further contributes to them not being very interesting to use for selfish purposes.)
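For rough orders of magnitude (all the figures below are round assumptions of mine, not taken from any particular source):

```python
# Rough, order-of-magnitude figures; every number here is an assumption.
milky_way_stars = 2e11      # commonly quoted range is ~100-400 billion
humans_alive = 8e9
print(milky_way_stars / humans_alive)  # ~25 stars per currently alive human

reachable_galaxies = 4e9    # very rough; depends on achievable travel speeds
stars_per_galaxy = 1e11     # very rough average
print(milky_way_stars / (reachable_galaxies * stars_per_galaxy))  # ~5e-10, under 1-in-a-billion
```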
When we’re talking about levels of consumption that are greater than a solar system, and that will only take place millions of years in the future, it seems like the relevant kind of human preferences to be looking at are something like “aesthetic” preferences. And so I think the relevant analogy is less that of present humans optimizing for their material welfare, and perhaps more something like “people preferring the aesthetics of a clean and untouched universe (or something else: like the aesthetics of a universe used for mostly non-sentient art) over the aesthetics of a universe which is packed with joy”.
I think your point “We may seek to rationalise the former [I personally don’t want to live in a large mediocre world, for self-interested reasons] as the more noble-seeming latter [desire for high average welfare]” is the kind of thing that might influence this aesthetic choice. Where “I personally don’t want to live in a large mediocre world, for self-interested reasons” would split into (i) “it feels bad to create a very unequal world where I have lots more resources than everyone else”, and (ii) “it feels bad to massively reduce the amount of resources that I personally have, to that of the average resident in a universe packed full with life”.
compared to MIRI people, or even someone like Christiano, you, or Joe Carlsmith probably have “low” estimates
Christiano says ~22% (“but you should treat these numbers as having 0.5 significant figures”) without a time-bound; and Carlsmith says “>10%” (see bottom of abstract) by 2070. So no big difference there.
I agree that having a prior and doing a Bayesian update makes the problem go away. But if that’s your approach, you need to have a prior and do a Bayesian update — or at least do some informal reasoning about where you think that would lead you. I’ve never seen anyone do this. (E.g. I don’t think this appeared in the top-level post?)
E.g.: Given this approach, I would’ve expected some section that encouraged the reader to reflect on their prior over how (dis)valuable conscious experience could be, and asked them to compare that with their own conscious experience. And if they were positively surprised by their own conscious experience (which they ought to have a 50% chance of being, with a calibrated prior) — then they should treat that as crucial evidence that humans are relatively more important compared to animals. And maybe some reflection on what the author finds when they try this experiment.
I’ve never seen anyone attempt this. My explanation for why is that this doesn’t really make any sense. Similar to Tomasik, I think questions about “how much to value humans vs. animals having various experiences” come down to questions of values & ethics, and I don’t think that these have common units that it makes sense to have a prior over.