Hmm true, I think I agree that this means the dynamics I describe matter less in expectation (because the positional goods-oriented people will be quite marginal in terms of using the resources of the universe).
Good point re aesthetics perhaps mattering more, and about people disvaluing inequality and therefore not wanting to create a lot of moderately good lives, lest they feel bad about having amazing lives and controlling vast amounts of resources.
Re “But I don’t think …” in your first paragraph, I am not sure what, if anything, we actually disagree about. I think what you are saying is that there are plenty of resources in our galaxy, and far more beyond, for all present people to have fairly arbitrarily large levels of wealth. I agree, and I am also saying that people may want to keep it roughly that way, rather than creating heaps of people and crowding up the universe.
Oh nice, I hadn’t seen that one, thanks!
Nice, good idea and well implemented!
Re wastewater being good for getting samples from lots of people at once without needing ethics clearance, but worse for respiratory pathogens: how feasible is airborne environmental DNA sampling? I have never looked into it; I just remember hearing someone give a talk about their work on this, I think related to this paper: https://www.sciencedirect.com/science/article/pii/S096098222101650X
I assume it is just hard to get the quantity of nucleic acids we would want from the air.
Flagging this for @Conrad K. - this seems like perhaps a better version of what you were considering building last year? If you have time you might have useful thoughts/suggestions.
I played around with the simulator a bit but didn’t find anything too counterintuitive. I noticed various minor suboptimal things; depending on what you want to do with the simulator, some of these may not be worth changing:
I found having many values in the relative abundance box for nasal swabs a bit confusing and harder to manage as a user. Why not specify a distribution with some parameters, rather than list lots of possible values drawn from that distribution? (A sketch of what I mean is below.)
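For instance, a sketch of the parameterised alternative I have in mind (the lognormal and its parameters are placeholders, not a recommendation):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
median, sigma = 1e-4, 1.5  # placeholder distribution parameters
# Draw one relative abundance per simulated person, instead of cycling
# through a hand-entered list of values.
abundances = rng.lognormal(mean=np.log(median), sigma=sigma, size=1000)
```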
The line here is not monotonic, as it should be, seemingly because the simulation hits 30% of the population and then stops. Rather than have the line go back to 0, maybe just stop it when it hits 30%, or have it plateau at 30% (a sketch below)?
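A minimal sketch of the plateau behaviour I mean (hypothetical; I don’t know how the simulator represents the series internally):

```python
CAP = 0.30  # the simulation's 30%-of-population ceiling

def plateau(series, cap=CAP):
    # Once the detection fraction reaches the cap, hold it there
    # instead of letting the plotted line drop back to 0.
    capped, hit = [], False
    for v in series:
        hit = hit or v >= cap
        capped.append(cap if hit else v)
    return capped

print(plateau([0.05, 0.12, 0.31, 0.0]))  # -> [0.05, 0.12, 0.3, 0.3]
```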
There were some issues with the sizing of the graph for me (using Chrome on Windows 11). At 100% zoom, part of the x-axis label and the y-axis numbers are cut off:
And the problem becomes worse if for whatever reason you run lots of scenarios, where the whole bottom half of the graph disappears:
Thanks for writing this up! Have you spoken to Christian Ruhl or anyone else at Founders Pledge about this work? I think FP would be interested in this and benefit from it.
I downvoted because there are lots of questions lumped together without enough motivation and cohesion for my liking, and compared to e.g. the moral weights project, the engagement with these subtle issues feels more flippant than serious.
Makes sense, sounds good!
Nice post! Re the competitive pressures, this seems especially problematic in long-timelines worlds where TAI is really hard to build. As a toy model: if company A spends all its cognitive assets on capabilities (including generating profit to fund this research), while company B spends half its cognitive assets at any given time on safety work with no spillover into capabilities, then if this exponential growth continues for long enough, company A will likely take the lead even if it starts well behind. Whereas if a relatively small amount of cognitive assets is ever deployed before TAI, safety-oriented companies being in the lead should be the dominant factor, and safety-ignoring companies wouldn’t be able to catch up even by ‘defecting’.
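For concreteness, a minimal sketch of the toy model (the growth rate, head start, and reinvestment splits are all illustrative assumptions, not estimates):

```python
import math

def assets(initial, capabilities_fraction, growth_rate, years):
    # Cognitive assets compound continuously; only the share reinvested
    # in capabilities contributes to growth.
    return initial * math.exp(growth_rate * capabilities_fraction * years)

r = 0.5  # assumed annual growth rate of cognitive assets
for t in (2, 5, 10, 20):
    a = assets(1.0, 1.0, r, t)  # company A: everything into capabilities
    b = assets(3.0, 0.5, r, t)  # company B: starts 3x ahead, half on safety
    print(f"t={t:2d}y  A={a:10.1f}  B={b:10.1f}  leader={'A' if a > b else 'B'}")
```

With these placeholder numbers B’s head start dominates on short horizons, but A overtakes around year four or five and the gap then compounds.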
Exciting! Why the relocation from Switzerland to the UK? The fact that there are more EA/X-risk projects already in London seems like both a pro (more networking and community opportunities, better access to mentors) and a con (less differentiation with other projects like ERA and MATS, less neglected than mainland Europe fellowships).
Feel free to not reply if you deliberately don’t want to make this reasoning public.
My guess now of where we most disagree is regarding the value of a world where AIs disempower humanity and go on to have a vast, technologically super-advanced, rapidly expanding civilisation. I think this would quite likely be of ~0 value, since we don’t really understand consciousness at all, and my guess is that AIs aren’t yet conscious; if we relatively quickly get to TAI in the current paradigm, they probably still won’t be moral patients. As a sentientist I don’t really care whether there is a huge future if humans (or something sufficiently related to humans, e.g. digital people we create as successors after carefully studying consciousness for a millennium and becoming very confident they have morally important experiences) aren’t in it.
So yes I agree frontier AI models are where the most transformative potential lies, but I would prefer to get there far later once we understand alignment and consciousness far better (while other less important tech progress continues in the meantime).
To red-team a strawman of your (simulated) argument: what about the Pascalian and fanatical implications across evidentially cooperating large worlds? I think we need some Bayesian, anthropic reasoning, lots of Squiggle notebooks, and perhaps a cross-cause cost-effectiveness model to get to the bottom of this!
Thanks, interesting idea. I think I mostly disagree: I would like to see AI progress specifically slowed/halted while advances continue in space exploration, biology, nuclear power, etc., and I expect that if we later get safe TAI we won’t have become too anti-technology/anti-growth to expand a lot. But I hadn’t thought about this before, and there probably is something to it; I just think it is most likely swamped by the risks from AI. It is a good reason for pause-AI-type pitches to be careful to focus narrowly on frontier AI models rather than on tech and science in general.
I suppose when I think about pro-expansion things I would like to see, they are only really ones that do not (IMO) increase x-risks: better institutions, more pro-natalism, space exploration, maybe cognitive enhancement.
That makes sense; yes, perhaps there are some fanaticism worries re my make-the-future-large approach, even more so than for x-risk work, and maybe I am less resistant to fanaticism-flavoured conclusions than you. That said, I think not all work like this need be fanatical: e.g. improving international cooperation and treaties for space exploration could be good in many frames (and bad in some frames you brought up, granted).
I don’t know lots about it, but I wonder if you would prefer more of a satisficing decision theory, where we focus on getting a decent outcome rather than necessarily the best one (e.g. Bostrom’s ‘Maxipok’ rule). So I think not wholeheartedly going for maximum expected value isn’t a sign of irrationality, and could reflect a different but sound decision approach.
Thanks for this really thoughtful engagement! I expected this would not be a take particularly to your liking, but your pushback is stronger than I anticipated, which is useful to hear. Perhaps, after playing with these ideas myself and with a few relatively similar people, I failed to realise how controversial and provocative they would be. Onto the substance:
It makes sense to me that the analogy is a bit weak; I think I mostly agree. The strongest part of the analogy, to me, is less the NIMBYs themselves and more who is politically empowered (a smaller group that is better coordinated, and actually exists, compared with the larger group of possible beneficiaries). Maybe I should have foregrounded this more, actually.
Re space expansion/colonisation: yeah, I don’t have much idea how all this would work, so my view is intuition-based. It is interesting how people have such different intuitive reactions to space expansion, roughly along these lines: pro-market, pro-”progress”, technologist, capitalist types (partially including me) pattern-match space expansion to other things they like and so are intuitively in favour, whereas environmentalists, localists, post-colonialists, social-justice-oriented people, degrowthers, etc. (also partially including me, but probably to a lesser extent) are intuitively pretty opposed. But I think it is reasonable to at least be worried about the socio-political consequences of a space focus. I am not at all sure how it would play out, and I am probably somewhat more optimistic than you, but yes, your worries seem plausible.
I completely agree there are far too few people working on x-risks, that there should be far more, that collapse is dangerous and scary, and that we are very much not out of the woods and things could go terribly. I suppose it is the nature of being scope-sensitive and prioritisation-focused, though, that something being very important and neglected and moderately tractable (like x-risk work) isn’t always enough for it to be the ‘best’ (granted, re your previous post, that this framing may not make sense). I’m not sure if this is what you had in mind, but I think there is some significance to risk-averse decision-making principles, where maybe avoiding extinction is especially important even compared to building (an even huger) utopia. So I have less clear views on what practically is best for people like me to be doing (for now I will continue to focus on catastrophic and existential risks). But I still think in principle it could be reasonable to focus on making a great future even larger and greater, even if that is unlikely. Another, perhaps tortured, analogy: you have founded a company, and could spend all your time trying to avoid going bankrupt and mitigating risks, but maybe some employee should spend some fraction of their time thinking about best-case scenarios and how you could massively expand and improve the company five years down the line, if everything else falls into place nicely.
As a process note, I think these discussions are a lot easier and better to have when we are (I think) both confident the other person is well-meaning and thoughtful and altruistic, I think otherwise it would be a lot easier to dismiss prematurely ideas I disagree with or find uncomfortable. So in other words I’m really glad I know you :)
My understanding is that you are unsupportive of earning-to-give (ETG). I agree the trappings of expensive personal luxuries are both substantively bad (often) and poor optics. But the core idea, that some people are very lucky and have the opportunity to earn huge amounts of money which they can (and should) then donate, and that this can be very morally valuable, seems right to me. My guess is that regardless of your critiques of specific charities (bednets, deworming, CATF) you still think there are morally important things to do with money. So what do you think of ETG: why is the central idea wrong (if you indeed think that)?
I was disappointed GiveDirectly wasn’t mentioned, given that seems to be more what he would favour. The closing anecdote about the surfer-philosopher donating money to Bali seems like a proto-GiveDirectly approach, but presumably a lot less efficient without the infrastructure to do it at scale.
Thanks for sharing, it sucks that you went through this (and sucks that the moths went through this :( ). As uncomfortable as thinking about these topics is, I am glad to be part of a community of people who take ethics seriously and try to act with compassion and consideration. Let’s hope market forces take effect and enough people inquiring about low-suffering ways to kill insects creates a market for companies to offer this :)
Nice!
I think this makes good sense as a toy theoretical model, and it updates me some way towards these conclusions, but not very far, because this sort of armchair theorising (while valuable and fun) is hard to get right for something as messy and empirical as this, as you note. So if someone were to investigate this further, I think the key steps would be to:
- look at the empirical literature, or conduct primary research, on pleasure/pain symmetry and whether it holds (maybe this would be intractable though)
- do some more involved population-dynamics modelling, e.g. with a system of ODEs for food, prey juveniles, prey adults, predator juveniles and predator adults (I think this would be very tractable, but less cruxy; see the sketch below)
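A minimal sketch of the kind of stage-structured model I mean (my own construction; every rate constant is an arbitrary placeholder):

```python
from scipy.integrate import solve_ivp

def dynamics(t, y):
    # Food F, prey juveniles Pj, prey adults Pa,
    # predator juveniles Qj, predator adults Qa.
    F, Pj, Pa, Qj, Qa = y
    dF = 1.0 - 0.02 * F - 0.05 * F * Pa          # replenishment, decay, grazing by adult prey
    dPj = 0.4 * F * Pa - 0.3 * Pj - 0.10 * Qa * Pj  # births from fed adults, maturation out, predation
    dPa = 0.3 * Pj - 0.05 * Qa * Pa - 0.02 * Pa     # maturation in, predation, natural death
    dQj = 0.5 * Qa * (0.10 * Pj + 0.05 * Pa) - 0.2 * Qj - 0.05 * Qj  # births scale with prey eaten; maturation out, death
    dQa = 0.2 * Qj - 0.03 * Qa                      # maturation in, natural death
    return [dF, dPj, dPa, dQj, dQa]

sol = solve_ivp(dynamics, (0, 200), [10.0, 1.0, 1.0, 0.5, 0.5])
print(sol.y[:, -1])  # final state: [F, Pj, Pa, Qj, Qa]
```

Steady-state juvenile-to-adult ratios and turnover rates from a model like this would then be the quantities to feed back into the welfare analysis.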
That said, I think contraception as an intervention mode stands on its own without these more speculative theoretical arguments (unless wild animals have quite positive lives on average, such that preventing them from coming into existence is bad, but this seems unlikely).
Nice post!
This seems like a key point to me, and one that is hard to get good evidence on. The red stripes are rather benign, so we are in luck in a world like that. But if the AI values something in a more totalising way (not just satisficing, with a lot of x’s and red stripes being enough, but striving to make all humans spend all their time making x’s and stripes), that seems problematic for us. Perhaps it depends on how ‘grabby’ the values are, and therefore how compatible they are with a liberal, pluralistic, multipolar world.