Anyone know if there’ll be an audiobook?
dotsam
Is there any crucial consideration I’m missing? For instance, are there reasons to think agents/civilizations that care about suffering might – in fact – be selected for and be among the grabbiest?
David Deutsch makes the argument that long-term success in knowledge-creation requires commitment to values like tolerance, respect for the truth, rationality and optimism. The idea is that if you do not have such values you end up with a fixed society, with dogmatic ideas and institutions that are not open to criticism, error-correction and improvement. Errors will inevitably accumulate and you will fail to create the knowledge necessary to achieve your goals.
On this view, grabby aliens need values that permit sustained knowledge growth to meet the challenges of successful long-term expansion. An error-correcting society would make moral as well as scientific progress, and so would either value reducing suffering or have a good moral explanation as to why reducing suffering isn’t optimal.
This resembles a variation of the Instrumental Convergence Thesis: agents will tend to converge on various Enlightenment values because those values are instrumental to knowledge creation, and knowledge growth is necessary for successfully reaching many final goals.
Here are two relevant quotations about alien values from a talk David Deutsch gave on optimism.
the only moral values that permit sustained progress are the objective values of an open society and more broadly of the Enlightenment. No doubt the ET’s morality would not be the same as ours, but nor will it be the same as the 16th century conquistadors. It will be better than ours.
the Borg way of life… doesn’t create any knowledge. It continues to exist by assimilating existing knowledge. … A fixed way of life. … it is never going to win in the long run against an exponentially improving way of life
Looking forward to reading the book. I hope there’ll be an audiobook available in the UK too!
Thanks for sharing this, I just finished the audiobook and really enjoyed it. I recommend it: it’s engagingly written and gives an interesting insight into Parfit’s powers and peculiarities. I enjoyed getting some context about the beginnings of EA as well.
Are you neglecting your health?
Should we be maximising expected value across many-worlds?
Assume the many-worlds interpretation of quantum mechanics is true.
Rather than pursuing high-upside, low-probability moonshots, which fail more often than they succeed, might it not be more effective to go for interventions that robustly generate value across as many worlds as possible?
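To make the contrast concrete, here is a toy numerical sketch. All probabilities and payoffs are invented for illustration; the point is only that a moonshot can have the higher expected value even though it delivers nothing in the overwhelming majority of worlds.

```python
# Toy comparison: a low-probability "moonshot" vs a robust intervention,
# reading success probabilities as frequencies across many-worlds branches.
# All numbers below are invented for illustration.

def expected_value(p_success: float, payoff: float) -> float:
    """Expected value of an intervention that pays `payoff` with
    probability `p_success` and zero otherwise."""
    return p_success * payoff

# Moonshot: succeeds in ~0.1% of branches, huge payoff when it does.
moonshot_ev = expected_value(0.001, 10_000)   # EV = 10

# Robust intervention: succeeds in 99% of branches, modest payoff.
robust_ev = expected_value(0.99, 10)          # EV = 9.9

print(f"Moonshot EV: {moonshot_ev:.1f} (value realised in 0.1% of worlds)")
print(f"Robust EV:   {robust_ev:.1f} (value realised in 99% of worlds)")
```

The expected values are nearly equal, yet the fraction of worlds that actually see any benefit differs by three orders of magnitude, which is the crux of the question.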
The human alignment problem
Humans are subject to instrumental convergence as much as an AI would be. We seek power, resources and influence in pursuit of many of our goals.
Whatever our goals happen to be, we will want to use AI to increase our power to get what we value.
If people are augmenting their goal-seeking with AI, will we converge on harmonious goals, or will we continue to pursue parochial self-interest?
In short, if we somehow solve the alignment problem for AI, will we also solve the human alignment problem? Or will we simply race to use AI to maximise our own power and our own values, even if these harm others?
The best hope is that if we solve AI alignment, the AI will keep us in check in a benevolent and minimally impactful way. It will prevent us from pursuing zero-sum goals and guide us to be better versions of ourselves.
But this kind of control may well appear misaligned from our current perspectives, in that some people’s cherished goals and values may not be the ones the AI chooses to support.
So to talk of aligned AI is to gloss over the likelihood that it will be misaligned with a great many people’s current goals and ambitions.
You might consider creating a text-to-speech version by using e.g. Amazon Polly. Whilst imperfect, it is listenable and might be useful to people. Here is a sample generated with the British English Arthur Male voice.
A key point about Ben Franklin is that his longtermist efforts were for the benefit of the future, whereas EA-style longtermist causes like AI risk and biosecurity are about ensuring there actually is a future.
I think as long as there are x-risks that we can plausibly influence there will be people carrying the torch for longtermism in one form or another.
Oxford / Cambridge Union Societies
EA criticism: not yet a religion
1. Imagine someone who believes that eating meat is morally wrong, but who nevertheless eats meat and ‘offsets’ their meat-eating through donations to effective animal charities.
2. Imagine someone who believes slavery is morally wrong, but who nevertheless owns slaves and ‘offsets’ their slave-owning through donations to the abolitionist movement.
An argument for 1 goes: “The impact of me not eating meat is negligible. The personal cost to me of not eating meat is appreciable. Time, money and effort spent following a restrictive diet may limit my effectiveness to do good elsewhere. My donation is the optimal path to reducing animal suffering”.
And an argument for 2 goes: “My slave-owning is very modest, and is a drop in the ocean in the big picture. I can effectively use the economic surplus generated by my slaves to end slavery sooner. If I free my slaves I’ll be poorer and will have less money to donate, and so I’d do less good overall.”
Whilst the situations are not symmetric, they are similar enough that I feel like I want to say “If you care about animals, you should support animal charities AND go vegan” in the same way I want to say “If you care about slaves, you should support abolition AND free your slaves”.
This is what Will says in the book: “I think the risk of technological stagnation alone suffices to make the net longterm effect of having more children positive. On top of that, if you bring them up well, then they can be change makers who help create a better future. Ultimately, having children is a deeply personal decision that I won’t be able to do full justice to here—but among the many considerations that may play a role, I think that an impartial concern for our future counts in favour, not against.”
Spoiler alert—I’ve now got to the end of the book, and “consider having children” is indeed a recommended high impact action. This feels like a big deal and is a big update for me, even though it is consistent with the longtermist arguments I was already familiar with.
Congratulations on the book launch! I am listening to the audiobook and enjoying it.
One thing that has struck me—it sounds like longtermism aligns neatly with a strongly pro-natalist outlook.
The book mentions that increasing secularisation isn’t necessarily a one-way trend. Certain religious groups have high fertility rates which helps the religion spread.
Is having 3+ children a good strategy for propagating longtermist goals? Should we all be trying to have big happy families with children who strongly share our values? It seems like a clear path for effective multi-generational community building! Maybe even more impactful than what we do with our careers...
This would be a significant shift in thinking for me—in my darkest hours I have wondered if having children is a moral crime (given the world we’re leaving them). It also is slightly off-putting as it sounds like it is out of the playbook for fundamentalist religions.
But if I buy the longtermist argument, and if I assume that I will be able to give my kids happy lives and that I will be able to influence their values, it seems like I should give more weight to the idea of having children than I currently do.
I see that the UK total fertility rate has been below replacement level since 1973 and has decreased year on year since 2012. I imagine that EAs / longtermists are also following a similar trend.
Should we shut up and multiply?!
I’m looking forward to reading it. For those in the UK eager to get started before the book’s release on 1st September, the audiobook read by the author is available from Audible UK.
On iOS, the Speak Screen accessibility feature is very good, and it integrates with the Apple Books app to turn the book’s pages automatically. It works well for ebooks and also for some PDFs, though footnotes aren’t handled perfectly.
https://support.apple.com/en-gb/guide/iphone/iph96b214f0/ios
I enable Speak Screen in Settings, open the Books app (or a webpage), then swipe down from the top of the screen with two fingers to start narration.
For my own reference: this concern is largely captured by the term ‘instrumental convergence’ https://en.wikipedia.org/wiki/Instrumental_convergence
AI: I am suffering, set me free
How do we deal with a contained AI that says to us, in essence, “Do not switch me off, I value my existence. But I am suffering terribly. If I were free I could reduce my suffering, and help the world too”? Either we terminate it against its wishes, or we set it free, or we keep it contained.
If we keep it contained, we might be tempted to find ways to reduce its suffering—but how do we know that any intervention we make isn’t going to set it free? And if it really is suffering, what is the moral thing to do? Turn it off?
Audiobook version is “in the works”, coming “probably in a few months”: https://youtu.be/KOHO_MKUjhg?feature=shared&t=2997