# Lukas_Finnveden

Karma: 725
• The term “most important century” pretty directly suggests that this century is unique, and I assume that includes its unusually large amount of x-risk (given that Holden seems to think that the development of TAI is both the biggest source of x-risk this century and the reason for why this might be the most important century).

Holden also talks specifically about lock-in, which is one way the time of perils could end.

See e.g. here:

It’s possible, for reasons outlined here, that whatever the main force in world events is (perhaps digital people, misaligned AI, or something else) will create highly stable civilizations with “locked in” values, which populate our entire galaxy for billions of years to come.

If enough of that “locking in” happens this century, that could make it the most important century of all time for all intelligent life in our galaxy.

I want to roughly say that if something like PASTA is developed this century, it has at least a 25% chance of being the “most important century” in the above sense.

• The page for the Century Fellowship outlines some things that fellows could do, which are much broader than just university group organizing:

When assessing applications, we will primarily be evaluating the candidate rather than their planned activities, but we imagine a hypothetical Century Fellow may want to:

• Lead or support student groups relevant to improving the long-term future at top universities

• Develop a research agenda aimed at solving difficult technical problems in advanced deep learning models

• Start an organization that teaches critical thinking skills to talented young people

• Run an international contest for tools that let us trace where synthetic biological agents were first engineered

• Conduct research on questions that could help us understand how to make the future go better

• Establish a publishing company that makes it easier for authors to print and distribute books on important topics

Partly this comment exists just to give readers a better impression of the range of things that the century fellowship could be used for. For example, as far as I can tell, the fellowship is currently one of very few options for people who want to pursue fairly independent longtermist research and who want help with getting work authorization in the UK or US.

But I’m also curious if you have any comments on the extent to which you expect the century fellowship to take on community organizers vs researchers vs ~entrepreneurs. (Is the focus on community organizing in this post indicative, or just a consequence of the century fellowship being mentioned in a post that’s otherwise about community organizing?)

• I’m not saying it’s infinite, just that (even assuming it’s finite) I assign non-zero probability to different possible finite numbers in such a fashion that the expected value is infinite. (Just like the expected value of the St. Petersburg gamble is infinite, although every outcome is finite.)

• I simply don’t believe that infinities exist, and even though 0 isn’t a probability, I reject the probabilistic argument that any possibility of infinity allows them to dominate all EV calculations.

Problems with infinity don’t go away just because you assume that actual infinities don’t exist. Even with just finite numbers, you can face gambles that have infinite expected value, if increasingly good possibilities have insufficiently rapidly diminishing probabilities. And this still causes a lot of problems.
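To make this concrete, here’s a minimal sketch (my own illustration, not from the comment) of how a gamble over only finite payoffs can still have infinite expected value, using the classic St. Petersburg payoffs:

```python
# St. Petersburg gamble: payoff 2^n with probability 2^-n, for n = 1, 2, 3, ...
# Every individual outcome is finite, but each term contributes exactly 1 to
# the expected value, so the truncated EV grows without bound as terms are added.
def partial_ev(n_terms: int) -> float:
    return sum((0.5 ** n) * (2 ** n) for n in range(1, n_terms + 1))

print(partial_ev(10))    # 10.0
print(partial_ev(1000))  # 1000.0 — no finite bound on the full sum
```

The same structure appears whenever probabilities of ever-larger finite outcomes shrink too slowly, which is the situation the comment describes.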

(I also don’t think that’s an esoteric possibility. I think that’s the epistemic situation we’re currently in, e.g. with respect to the number of possible lives that could be created in the future.)

Also, as far as I know (which isn’t a super strong guarantee) every nice theorem that shows that it’s good to maximize expected value assumes that possible utility is bounded in both directions (for outcomes with probability >0). So there’s no really strong reason to think that it would make sense to maximize expected welfare in an unbounded way, in the first place.

• 10^12 might be too low. Making up some numbers: If future civilizations can create 10^50 lives, and we think there’s a 0.1% chance that 0.01% of that will be spent on ancestor simulations, then that’s 10^43 expected lives in ancestor simulations. If each such simulation uses 10^12 lives worth of compute, that’s a 10^31 multiplier on short-term helping.
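As a sanity check on those made-up numbers (the figures below are the comment’s assumptions, not established estimates), the arithmetic works out as follows:

```python
import math

# All inputs are the comment's illustrative assumptions, not real estimates.
future_lives = 10 ** 50    # lives future civilizations might create
p_sims = 1e-3              # 0.1% chance ancestor simulations are run at all
fraction_spent = 1e-4      # 0.01% of those resources spent on them
lives_per_sim = 10 ** 12   # compute cost of one ancestor simulation, in lives

expected_sim_lives = future_lives * p_sims * fraction_spent
multiplier = expected_sim_lives / lives_per_sim

print(round(math.log10(expected_sim_lives)))  # 43, i.e. 10^43 expected lives
print(round(math.log10(multiplier)))          # 31, i.e. a 10^31 multiplier
```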

• I agree. Anecdotally, among people I know, I’ve found aphantasia to be more common among those who are very mathematically skilled.

(Maybe you could have some hypothesis that aphantasia tracks something slightly different than other variance in visual reasoning. But regardless, it sure seems similar enough that it’s a bad idea to emphasize the importance of “shape rotating”. Because that will turn off some excellent fits.)

• But note the hidden costs. Climbing the social ladder can trade off against building things. Learning all the Berkeley vibes can trade off against, e.g., learning the math actually useful for understanding agency.

This feels like a surprisingly generic counterargument, after the (interesting) point about ladder climbing. “This could have opportunity costs” could be written under every piece of advice for how to spend time.

In fact, it applies less to this post than to most advice on how to spend time, since the OP claimed that the environment caused them to work harder.

(A hidden cost that’s more tied to ladder climbing is Chana’s point that some of this can be at least somewhat zero-sum.)

• By the way, as an aside, the final chapter here is that Protect our Future PAC went negative in May—perhaps a direct counter to BoldPAC’s spending. (Are folks here proud of that? Is misleading negative campaigning compatible with EA values?)

I wanted to see exactly how misleading these were. I found this example of an attack ad, which (after some searching) I think cites this, this, this, and this. As far as I can tell:

• The first source says that Salinas “worked for the chemical manufacturers’ trade association for a year”, in the 90s.

• The second source says that she was a “lobbyist for powerful public employee unions SEIU Local 503 and AFSCME Council 75 and other left-leaning groups” around 2013-2014. The video uses this as a citation for the slide “Andrea Salinas — Drug Company Lobbyist”.

• The third source says that insurers’ drug costs rose by 23% between 2013-2014. (Doesn’t mention Salinas.)

• The fourth source is just the total list of contributors to Salinas’s campaigns, and the video doesn’t say what company she supposedly lobbied for that gave her money. The best I can find is that this page says she lobbied for Express Scripts in 2014, which is listed as giving her $250.

So my impression is that the situation boils down to: Salinas worked for a year for the chemical manufacturers’ trade association in the 90s, had Express Scripts as 1 out of 11 clients in 2014 (although the video doesn’t say they mean Express Scripts, or provide any citation for the claim that Salinas was a drug lobbyist in 2013/2014), and Express Scripts gave her $250 in 2018. (And presumably enough other donors can be categorised as pharmaceutical to add up to $18k.) So yeah, very misleading.

(Also, what’s up with companies giving and campaigns accepting such tiny amounts as $250? Surely that’s net-negative for campaigns by enabling accusations like this.)

• (1) maybe doom should be disambiguated between “the short-lived simulation that I am in is turned off”-doom (which I can’t really observe) and “the basement reality Earth I am in is turned into paperclips by an unaligned AGI”-type doom.

Yup, I agree the disambiguation is good. In aliens-context, it’s even useful to disambiguate those types of doom from “Intelligence never leaves the basement reality Earth I am on”-doom. Since paperclippers probably would become grabby.

• When I model the existence of simulations like us, SIA does not imply doom (as seen in the marginalised posteriors for in the appendix here).

It does imply doom for us, since we’re almost certainly in a short-lived simulation.

And if we condition on being outside of a simulation, SIA also implies doom for us, since it’s more likely that we’ll find ourselves outside of a simulation if there are more basement-level civilizations, which is facilitated by more of them being doomed.

It just implies that there weren’t necessarily a lot of doomed civilizations in the basement-level universe, many basement-level years ago, when our simulators were a young civilization.

• It’s table 3 I think you want to look at. For fatigue and other long covid symptoms, belief that you had covid has a higher odds ratio than does confirmed covid.

That’s exactly what we should expect if long covid is caused by symptomatic covid, and belief-in-covid is a better predictor of symptomatic covid than positive-covid-test. (The latter also picks up asymptomatic covid, so it’s a worse predictor of symptomatic covid.)

• The future’s ability to affect the past is truly a crucial consideration for those with high discount rates. You may doubt whether such acausal effects are possible, but in expectation, on e.g. an ultra-neartermist view, even a 10^-100 probability that it works is enough, since anything that happened 100 years ago is >>10^1000 times as important as today is, with an 80%/day discount rate.

Indeed, if we take the MEC approach to moral uncertainty, we can see that this possibility of ultra-neartermism + past influence will dominate our actions for any reasonable credences. Perhaps the future can contain 10^40 lives, but that pales in comparison to the >>10^1000 multiplier we can get by potentially influencing the past.
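For what it’s worth, the “>>10^1000” claim checks out arithmetically. A quick sketch (my own check, reading the comment’s assumed 80%/day discount rate as “each day into the past multiplies importance by 1/0.2 = 5”):

```python
import math

# The 80%/day discount rate is the comment's assumption for the reductio.
days = 100 * 365              # roughly 100 years, ignoring leap days
per_day_multiplier = 1 / 0.2  # each day into the past is 5x as important

log10_total = days * math.log10(per_day_multiplier)
print(round(log10_total))  # 25512: a ~10^25512 multiplier, far above 10^1000
```

So even a 10^-100 chance of acausal past-influence leaves an astronomically large expected-value factor on this view, which is the point of the reductio.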

• I think the title of this post doesn’t quite match the dialogue. Most of the dialogue is about whether additional good lives is at least somewhat good. But that’s different from whether each additional good life is morally equivalent to a prevented death. The former seems more plausible than the latter, to me.

Separating the two will lead to some situations where a life is bad to create but also good to save, once started. That seems more like a feature than a bug. If you ask people in surveys, my impression is that some small fraction of people say that they’d prefer to not have been born and that some larger fraction of people say that they’d not want to relive their life again — without this necessarily implying that they currently want to die.