“The secret to happiness: Find something more important than you are and dedicate your life to it.”
Dan Dennett—TED talk
Love and knowledge, so far as they were possible, led upward toward the heavens. But always pity brought me back to earth. Echoes of cries of pain reverberate in my heart. Children in famine, victims tortured by oppressors, helpless old people a hated burden to their sons, and the whole world of loneliness, poverty, and pain make a mockery of what human life should be. I long to alleviate this evil, but I cannot, and I too suffer.
Bertrand Russell—What I Have Lived For
We should probably divide people into more than one plane, to prevent existential risk.
Anna Salamon—on one of the first long-distance trips between the Berkeley hub of effective altruism and the Oxford one, approx. 2011
The very ability to consider what one’s position would be in a scenario very different from the one in which one finds oneself is predicated on controlling the impulse to react to the immediate environment.
The common feature between, say, Nick Bostrom’s PhD thesis, Nick Beckstead’s PhD thesis, and Paul Christiano’s blog Rational Altruist is a capacity to hold ever fewer particularities of one’s environment as true come what may.
Empathy is just the opposite of that: it is frequently seen as the immediate, System 1, uncontrollable emotion one experiences when someone else in one’s local surroundings undergoes distress.
I’ve argued in the past, and will continue to argue, that the moral obligation is higher, not lower, for people with less empathy. I’m much more forgiving of people who give locally, and thereby fail to save globally, if they do it to avoid feeling the sadness of empathy.
In this thread, you try to argue as well as you can against the cause you currently consider the highest-expected-value cause to be working on. Then you recalibrate your emotions given the new evidence you just generated. This is not just a fun exercise. It has been shown that if you want to change a person’s moral intuitions, the best way is to get them to scrutinize in detail the policy they favor, not to show evidence for why other policies are a good idea or why the person is wrong. To get your mind to change, the best way is to give it a zooming lens into itself. So what is your cause?
The cause I currently think is most important is what I call “getting the order right”. It assumes that, for all technological interventions that might drastically reshape the future, there are conditional dependencies on when each is discovered or invented, such that under different contexts and timelines each would be significantly more or less dangerous in X-risk terms. So, here is why this may not be the best cause:
To begin with, it seems plausible that the Tricky Expectation View discussed on p. 85 of Beckstead’s thesis holds despite his arguments. This would drastically reduce the overall importance of existential risk reduction. One way in which TEV would hold, or an argument for views in that family, comes from considering the set of all possible human minds, and noticing that many considerations, both probabilistic and moral, stop being intuitive when deciding whether plucking one of these infinitesimally improbable entities from non-existence into existence is actually a good deal. No matter what we do, most minds will never exist.
Depending on how we carve the conceptual distinction that individuates a mind, we could get even lower orders of probability of existence for any given mind. Furthermore, if being of a different type (in the philosophical type/token distinction) from something that has already existed is not a relevant distinction, the argument gets even easier: each possible mind token will, with overwhelming probability, never live.
If there are infinitesimally small differences between minds, then there are at least Aleph-1 non-existent minds, and Aleph-2 non-existent mind tokens.
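A minimal sketch of where those cardinalities could come from, under assumptions I am adding here (minds individuated by real-valued parameters; tokens identified with arbitrary sets of possible minds), not anything argued above:

```latex
% Sketch under my added assumptions, not a claim from the text.
% Minds: points of \mathbb{R}^n; tokens: arbitrary sets of possible minds.
|\mathrm{Minds}| \ge |\mathbb{R}| = 2^{\aleph_0} \ge \aleph_1
\qquad
|\mathrm{Tokens}| \ge 2^{|\mathrm{Minds}|} \ge 2^{2^{\aleph_0}} \ge 2^{\aleph_1} \ge \aleph_2
```

The first line is just the cardinality of the continuum; the second applies Cantor’s theorem to the power set of the set of minds, so neither bound depends on the continuum hypothesis.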
These infinities seem to point to some sort of asymmetric view, in which there is some form of affiliation with existence that is indeed correlated with being valuable. It may not be as straightforward as “only living minds matter”, or even *the Tricky Expectation View*, but something in that vicinity: some sort of discount rate that is fully justified, even in the face of astronomical waste, moral uncertainty, etc. This would be one angle of attack.
Another angle is assuming that X-risk indeed trumps all other problems, but that it can be reduced more efficiently by doing things other than figuring out the most desirable order. It may be that there are as-yet-unknown anthropogenic X-risks, in which case focusing on locating ways in which humans could soon destroy themselves would be more valuable than solving the known ones. An argument for that may take this form:
A) There are true relevant unknown facts about the Dung Beetle
B) Our Bayesian shift on how many unknown unknowns are left in a domain should roughly correlate with the amount of research that has already been done on the topic.
C) Substantially more research has been done on Dung Beetles than existential risks.
Conclusion: There are true unknown relevant facts about X-risk.
‘Relevant’ here would range over [X-risks], which would mean either a substantial revision of conditional probabilities on different X-risks, or else a substantial revision of the whole network once an unknown risk is accounted for.
So getting the order right would be less relevant than spending resources on finding unknown unknowns.
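To make premise B concrete, here is a toy numerical sketch; the exponential discovery curve and every number in it are my own illustrative assumptions, not anything established above:

```python
import math

# Toy model: each unit of research uncovers every still-unknown fact
# independently with a small probability, giving an exponential
# discovery curve. All parameters are illustrative assumptions.

def expected_unknowns(total_facts: float, research_units: float,
                      discovery_rate: float = 0.01) -> float:
    """Expected number of facts still unknown after `research_units`
    of research effort, under the exponential-discovery assumption."""
    return total_facts * math.exp(-discovery_rate * research_units)

# Hypothetical numbers: suppose dung-beetle biology has received
# 100x the research effort that existential risk has.
print(expected_unknowns(1000, research_units=500))  # beetles: ~6.7 facts left
print(expected_unknowns(1000, research_units=5))    # X-risk:  ~951 facts left
```

Under any curve of roughly this shape, the under-researched domain retains far more unknown unknowns per unit of effort spent finding them, which is all premise B needs.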
Anti-me: Finally, if our probability mass is highly concentrated in the hypothesis that we are in a simulation (say, 25% confidence), then the amount of research so far dedicated to avoiding X-risk for simulations is even lower than the amount put into getting the order right. So one’s counterfactual irreplaceability would be higher in studying and understanding how to survive as a simulant, and how to cause your simulation not to be destroyed.
Anti-me 2: An opponent may say that if we are in a simulation, then our perishing would not be an existential risk, since at least one layer of civilization exists above us. Our being destroyed would not be a big deal in the grand scheme of things, so the order in which our technological maturity progresses is irrelevant.
Diego: The natural response is that this would introduce one more multiplicative factor on the X-risk of value loss. We conditionalize the likelihood of our values being lost on our being in a simulation, and this becomes the new value of X-risk prevention. So my counterargument is that, once X-risk prevention matters sufficiently little, other considerations besides what Bostrom calls MaxiPOK would start to enter the field of crucial considerations. Not only would we desire to increase the chances of an OK future with no catastrophe, but we would like to steer the future into an awesome place, within our simulation. Not unlike what a technologically progressive monotheist utilitarian would do, once she conditionalizes on God taking care of X-risk.
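One way to write the multiplicative factor being described, in notation I am supplying here:

```latex
% Sketch in my own notation, not Bostrom's.
P(\text{value loss}) = P(\text{sim}) \cdot P(\text{loss} \mid \text{sim})
                     + \bigl(1 - P(\text{sim})\bigr) \cdot P(\text{loss} \mid \lnot\text{sim})
```

On this reading, the value of X-risk prevention within our branch scales with P(loss | sim), and once that term is small enough, MaxiPOK stops dominating the other crucial considerations.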
But MaxiGreat also seems to rely fundamentally on the order in which technological maturity is achieved. If we get emulations too soon, Malthusianism may create an OK, but not awesome, future for us. If we become transhuman in some controlled way and intelligence explosions are impossible, we may end up in the awesome future dreamt of by David Pearce, for instance.
(It’s getting harder to argue against me in this simulation of being in a simulation. Maybe order indeed should be the crucial consideration for the subset of probability mass in which we are simulated, so I’ll stop here).
Many authors, in particular Hofstadter and Joshua Greene, have made the case for a principle of moral distribution in which the fundamental unit is not the individual human mind. Hofstadter talks about soul sizes, and how some people have larger souls, whereas Joshua Greene advocates a new and sophisticated understanding of moral exchange in cases of Us versus Them (not Me versus Us). Their reasoning seems more consistent and effective than Singer’s to me. I suppose all of them can be considered consequentialist, so that property alone cannot be what distinguishes the arguments at hand from those brought forth by Singer and Bentham. I highly recommend Moral Tribes for Effective Altruists and Ineffective non-altruists alike.
David, which sort of material do you think could be persuasive to the higher ecclesiastical orders, so that their charity would be more focused on GiveWell-recommended charities and similar sorts of evidence-based, calculation-based giving?
How can we get priests to talk about the child in the pond to the faithful, in a scalable and tractable manner?
For a data point: I do feel the elitist connotations.
In general.
There’s an interesting question related to this: suppose Alice will work well as an EA as long as she believes she is fighting global poverty now. Alice’s emotions are tuned to her long-term goal of decreasing poverty. It is also the case that if Alice believed X-risk was more important, she would not be as motivated to work (emotionally, or due to not feeling she has a comparative advantage in X-risk reduction). Should Bob try to persuade Alice of the importance of X-risk? I say no. If there is strong reason to think that Alice is more effective at reaching the instrumental goals of all EAs while believing something she will be convinced against in the future, then, at least for the time being, we should let her be.
Giving What We Can might not have started if Toby Ord hadn’t thought poverty alleviation was the outstanding cause of our age. If he is convinced now that superintelligence is actually more important, the benefit he created for poverty alleviation will still thrive. Also, given EAs’ propensity to change their minds, the creation of GWWC may well increase the number of people studying and working on superintelligence.
I’m interested in how that differs from the way GiveWell does its assessments. What justifies these differences?
Ryan, thanks, this looks great!
Which other EA websites are connected to this one so far? Which will be in the future?
One of the concepts currently gaining traction among EAs is that of Crucial Considerations.
Which considerations do you think will be most crucial for us to get right in the next ten years in order to produce a massively better world?
What are the best books related to altruism that you have seen? Which books mostly influenced your thinking as an EA?
Moral Tribes—Joshua Greene
The Better Angels of Our Nature—Pinker
Mathematical Models of Social Evolution—McElreath and Boyd
Nonzero—Wright
The Intentional Stance—Dennett
Darwin’s Dangerous Idea—Dennett
Good and Real—Drescher
Mortals and Others—Russell
Proposed Roads to Freedom—Russell
Autobiography—Bertrand Russell
Superintelligence—Bostrom
Collapse—Diamond
Bonobo Handshake—Vanessa Woods
La Grammatologie—Derrida (this one was influential for how terribly ineffective and non-altruistic it is, which gave me an alarm for useless philosophy)
Is Personal Identity What Matters?—Derek Parfit (made me realize that there is as much reason to care about you, reading this, as there is to care about my retired self, and it’s just cheaper to help distant others.)
I would like to caution against a Media Thread, since the assumption behind having one in the first place is that it is desirable for particular videos, essays, etc. to be consumed by most EAs. Media is very time-consuming. Frequently people would post out of excitement about some new idea that is either old news for others or irrelevant.
On the other hand, I find the Bragging thread on LessWrong a very commendable and fruitful endeavour for the EA Forum. It puts feeling good about yourself, staying motivated, and being public about your giving all in one bucket. So I’d suggest perhaps changing the “What are you working on” thread to a “Bragging thread”; later, if the need is still there, reinstate “What are you working on”.
Hi all
This is a copy-paste of my Gratipay account. I suggest others look into creating Gratipay accounts of their own, both to donate to other EAs and to receive donations as EAs.
I am making the world better by dedicating 70% of my time to EA.

My history:
* I gave the first TED talk about Effective Altruism
* Directed, over the last two years, an NGO that promotes Effective Altruism and Transhumanism in Brazil: www.IERFH.org
* Wrote a book on how to think about impactful philosophy of mind, as Dennett does
I’ve written articles and blog posts on topics ranging from philosophy of mind to artificial intelligence and motivation, seeking to find the best alternatives to create a better world. I’ve been a visiting fellow at Oxford, MIRI, Leverage Research, and FHI, and am currently at UC Berkeley as a non-paid scholar.
My projects:
* I continue to direct IERFH, which translates, broadcasts, and researches information about EA and the far future.
* I am working on bridging current academic thinking about the mathematics of the emergence of altruism in biology and inter-agent human cooperation and altruism, on the one hand, with institutional and individual cooperation on the other.
* I’m working on setting up this very EA funding network to facilitate donations between EAs.
* I’ve given presentations at the World Social Forum and TED on AI and EA, and will continue to do so given opportunities.