Thanks for your comment. It makes me realize I failed to properly communicate some of my ideas. Hopefully this comment can elucidate them.
Better democracy won’t help much with EA causes if people generally don’t care about them
More democracy could even make things worse (see 10% Less Democracy), but much better democracy wouldn't, because it would do things like:
Disentangling values from expertise (ex.: predicting which global catastrophes are most likely shouldn’t be done democratically, but rather with expert systems such as prediction markets)
Representing the unrepresented (ex.: having a group representing the interest of non-human animals during elections)
we choose EA causes in part based on their neglectedness
I was claiming that, under the best system, all causes would be equally (not) neglected. Although, as I conceded in the previous comment, this wouldn't be entirely true because people have different fundamental values.
Causes have to be made salient to people, and that’s a role for advocacy to play,
I think most causes wouldn't have to be made salient to people if we had a great System. You can have something like (with a lot of details still to be worked out): 1) a prediction market to predict which values existing people would vote for in the future, and 2) a prediction market to predict which interventions will fulfill those values the most. And psychological research and education helping people to introspect are common goods that would likely be financed by such a System. Also, if ‘advocacy’ is a way of enforcing cooperative social norms, then this would be fixed by solving coordination problems.
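To make the two-market idea more concrete, here's a minimal Python sketch using a standard LMSR (logarithmic market scoring rule) market maker. The claim wordings, liquidity parameter, and trade sizes are illustrative assumptions of mine, not a worked-out design:

```python
import math

class LMSRMarket:
    """Minimal logarithmic market scoring rule (LMSR) market for a binary claim."""
    def __init__(self, claim, liquidity=100.0):
        self.claim = claim   # the question the market is predicting
        self.b = liquidity   # higher b = more liquidity, flatter prices
        self.q_yes = 0.0     # net YES shares sold so far
        self.q_no = 0.0      # net NO shares sold so far

    def cost(self, q_yes, q_no):
        # LMSR cost function: C(q) = b * ln(e^(q_yes/b) + e^(q_no/b))
        return self.b * math.log(math.exp(q_yes / self.b) + math.exp(q_no / self.b))

    def price_yes(self):
        """Current implied probability of YES."""
        e_yes = math.exp(self.q_yes / self.b)
        e_no = math.exp(self.q_no / self.b)
        return e_yes / (e_yes + e_no)

    def buy_yes(self, shares):
        """Buy YES shares; returns what the trader pays to move the market."""
        before = self.cost(self.q_yes, self.q_no)
        self.q_yes += shares
        return self.cost(self.q_yes, self.q_no) - before

# Stage 1: market over what values people would endorse on reflection.
values_market = LMSRMarket("In 2040, voters will endorse value V as a top priority")
# Stage 2: conditional market over which intervention best fulfills that value.
intervention_market = LMSRMarket("Conditional on funding X, value V is fulfilled best")

values_market.buy_yes(50)
print(values_market.price_yes())  # implied probability after the trade (~0.62)
```

The point of the two stages is the separation from the earlier comment: stage 1 estimates values, stage 2 estimates which interventions serve them, so expertise never has to be voted on directly.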
But maybe you want to declare ideological war, and aim to overwrite people’s terminal values with yours, hence partly killing their identity in the process. If that’s what you mean by ‘advocacy’, then you’re right that this wouldn’t be captured by the System, and ‘philanthropy’ would still be needed. But protecting ourselves against such ideological attacks is a social good: it’s good for everyone individually to be protected. I also think it’s likely better for everyone (or at least a supermajority) to have this protection for everyone rather than for no one. If we let ideological wars go on, there will likely be an evolutionary process that will select for ideologies adapted to their environment, which is likely to be worse from most currently existing people’s moral standpoint than if there had been ideological peace. Robin Hanson has written a lot about such multipolar outcomes.
Maybe pushing for altruism right now is a good patch for funding social goods in the current System. And maybe current ideological wars against weaker ideologies are rational. But I don't think that's the best solution in the long run.
Also relevant: Against moral advocacy.
I’m not sure you can or should try to capture this all without philanthropy
I proposed arguments for and against capturing philanthropy in the article. If you have more considerations to add, I’m interested.
Also, I don’t think inequality will ever be fixed, since there’s no well-defined target. People will always argue about what’s fair, because of differing values.
I don't know. Maybe we settle on the Schelling point of splitting the Universe among all political actors (or in some other way), and this gets locked in through apparatuses like Windfall clauses (for example), and even if some people disagree with them, they can't change them. Although they could still decide to redistribute their own wealth in a way that's more fair according to their values, so in that sense you're right that there would still be a place for philanthropy.
Some issues may remain extremely expensive to address [...] so people as a group may be unwilling to fund them, and that’s where advocates and philanthropists should come in.
I guess it comes down to inequality. Maybe someone thinks it’s particularly unfair that someone has a rare disease, and so is willing to spend more resources on it than what the collective wants. And so they would inject more resources in a market for this value.
Another example: maybe the Universe is split equally among everyone alive at the point of the intelligence explosion, but some people will want to redistribute some of their wealth to fulfill the preferences of dead people, or will want to reward those that helped make this happen.
What is “just the right amount”?
I was thinking of something like: the amount one would spend if everyone else spent the same amount as them, repeating this process for everyone and summing all those quantities. This would just be the resources spent on a value; how to actually use those resources for that value would be decided by some expert systems.
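Formalizing that a bit (the notation $s_i$, $u_i$, and $T$ is mine, introduced only for this sketch of what I mean):

```latex
% s_i: the amount person i would choose to spend on the value, under the
% assumption that everyone else spends that same amount ("universalization").
s_i \;=\; \arg\max_{s \,\ge\, 0}\; u_i\!\left(\text{everyone spends } s \text{ on the value}\right)

% T: the total amount the System would then allocate to the value.
T \;=\; \sum_{i=1}^{n} s_i
```

So each person's contribution is pinned down by what they would endorse if it were universalized, which sidesteps the free-rider incentive to understate one's valuation.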
And how do you see the UN coming to fund it if they haven’t so far?
The UN would need to have more power. But I don’t know how to make this happen.
If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?
At this point we would have formed a political singleton. I think a significant part of our entire world economy would be structured around AI safety. So more.
How else would you see (longtermist) AI safety make up for Open Phil’s funding through political mechanisms, given how much people care about it?
As mentioned above, using something like Futarchy.
Creating a perfect system would be hard, but I'm proposing moving in that direction. I've updated toward thinking that even with a perfect system, some people would still want to redistribute their wealth, but less so than currently.
Good point. My implicit idea was to have the money in an independent trust, so that the “punishment” is easier to enforce.
I wonder how people in the EA community compare with people in general, notably controlling for income. I also wonder how much is given in the form of a reduced salary or volunteering, and how that compares to people in general.
Cross-post means copy-pasting the entire article into the post on the EA Forum
Thanks for your comment, it helped me clarify my model to myself.
especially politically unempowered moral beings
It proposes a lot of different voting systems to avoid (human) minorities being oppressed.
I could definitely see them develop systems to include future / past people.
But I agree they don't seem to tackle beings not capable (at least in some ways) of representing themselves, like non-human animals and reinforcement learners. Good point. It might be a blind spot for that community(?)
or many of the EA causes
Such as? Can you see other altruistic uses of philanthropy besides coordination problems, politically empowering moral beings, and fixing inequality? Although maybe that assumes preference utilitarianism. With pure positive hedonistic utilitarianism, wanting to create more happy people is not really a coordination problem (to the extent that most people are not positive hedonistic utilitarians), nor about empowering moral beings (ie. happiness is mandatory), nor about fixing inequalities (nor is it an egoistic preference).
Maybe it can make solving them easier, but it doesn’t offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.
Oh, I agree solving coordination failures to finance public goods doesn't solve the AI safety problem, but it solves the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoistic motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren't coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.
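The "just as effective at the margin" claim is the standard first-order condition for an optimal budget allocation ($W$ and the $x_i$ are notation I'm introducing here):

```latex
% With a budget allocated as (x_1, ..., x_k) across k public goods and a
% social welfare function W(x_1, ..., x_k), an optimal allocation equalizes
% marginal returns across goods:
\frac{\partial W}{\partial x_1} \;=\; \frac{\partial W}{\partial x_2} \;=\; \cdots \;=\; \frac{\partial W}{\partial x_k}
% Otherwise, moving a dollar from a low-return good to a high-return good
% would increase W, i.e. some philanthropic target would still beat the rest.
```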
Mind-readers as a neglected life extension strategy
Last updated: 2020-03-30
Status: idea to integrate in a longer article
Death is bad
Lifelogging is a bet worth taking as a life extension strategy
It seems like a potentially really important and neglected intervention is improving mind-readers, as the mind is by far the most important part of our experience that isn't / can't be captured at the moment.
We don't actually need to be able to read the mind right now, just to be able to record the mind with sufficiently high resolution (plausibly alongside text and audio recordings, to be able to determine which brain patterns correspond to which kinds of thoughts); a hypothetical record format is sketched after these notes.
Assuming we had extremely good software, how much could we read minds with our current hardware? (ie. how much is it worth recording your thoughts right now?)
How inconvenient would it be? How much would it cost?
Ask on Metaculus about some operationalisation of the first question
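As a sketch of what "record now, decode later" could mean, here is a hypothetical record format in Python. The class, field names, and sampling choices are all illustrative assumptions of mine, not a real standard:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class AlignedMindLog:
    """Hypothetical record: raw neural data stored alongside time-stamped
    text/audio annotations, so that future decoders could learn which brain
    patterns correspond to which kinds of thoughts."""
    timestamp_s: float           # seconds since the start of the session
    neural_samples: List[float]  # e.g. one window of EEG/MEG channel readings
    text_note: str = ""          # what the subject reports thinking about
    audio_path: str = ""         # path to a synchronized audio clip, if any

# A session is just a time-ordered list of these records.
session: List[AlignedMindLog] = []
session.append(AlignedMindLog(
    timestamp_s=12.5,
    neural_samples=[0.12, -0.03, 0.07],  # placeholder channel values
    text_note="thinking about lunch",
))
```

The design choice being illustrated: the text/audio annotations are what would let future software treat the archive as labeled training data, even if today's software can't decode the neural samples at all.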
Category: Intervention idea
Epistemic status: speculative; arm-chair thinking; non-expert idea; unfleshed idea
Proposal: Have nuclear powers insure each other against nuclear strikes, to reinforce mutually assured destruction (ie. destroying my infrastructure means destroying your own economy). Not accepting an offer of mutual insurance should be seen as extremely hostile and uncooperative, and possibly even be severely sanctioned internationally.
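A toy payoff illustration of the mechanism (all numbers are made-up assumptions purely to show the incentive shift, not estimates):

```python
INFRASTRUCTURE_VALUE = 100   # what each side loses if it is nuked (unused gain baseline)
INSURANCE_PAYOUT = 150       # what the attacker must pay the victim under the treaty

def attacker_payoff(strikes: bool, insured: bool) -> int:
    """Net payoff to a state considering a first strike."""
    if not strikes:
        return 0
    gain_from_strike = 20                       # assumed strategic gain from striking
    cost = INSURANCE_PAYOUT if insured else 0   # insurance makes the attacker pay
    return gain_from_strike - cost

print(attacker_payoff(strikes=True, insured=False))  #  20: striking can pay off
print(attacker_payoff(strikes=True, insured=True))   # -130: striking is self-destructive
```

The insurance converts the victim's loss into the attacker's economic loss, so "destroying my infrastructure means destroying your economy" holds even without retaliation.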
This gave me the idea of The Bullshit Awards
The Bullshit Awards
Proposal: Give prizes to people spotting / blowing the whistle on papers that bullshit their readers, and explaining why.
Details: There could be a Bullshit Alert Prize for the one blowing the whistle, and a Bullshit Award for the one having done the bullshitting. This would be similar to the Darwin Awards in that you don’t want to be the source of such an award.
Example: An analysis that could have won this is Why We Sleep — a tale of institutional failure.
Note: I’m not sure whether that’s a good way to go about fixing that problem. Is shaming a useful tool?
Harry Potter meme related to this post ^^: https://www.facebook.com/groups/OMfCT/permalink/2502301776751392/
Two of the main blockers for prediction markets seem to be 1) legality, and 2) subsidies. Seems like this state of emergency / the immediate potential benefit of prediction markets might be a good time to address 1), and maybe even 2)
Moved from my short form; created on 2020-02-28
Group to discuss information hazard
Context: Sometimes I come up with ideas that are very likely information hazards, and I don't share them. Most of the time I come up with ideas that are very likely not information hazards.
Problem: But sometimes I also come up with ideas that are in-between, or that I can't tell whether I should share or not.
Solution hypothesis: I propose creating a group with which one can share such ideas to get external feedback on them and/or about whether they should be shared more widely or not. To reduce the risk of information leaking from that group, the group could:
be kept small (5 participants?)
note: there can always be more such groups
exam on information hazards / on Bostrom's paper on the topic
notably: some classes of hazard should definitely not be shared in that group, and this should be made explicit
questionnaire on how one handled information in the past
have a designated member share a link on an applicant’s Facebook wall with rewards for reporting antisocial behavior
pledge to treat the information with the utmost seriousness
commit to give feedback for each idea (to have a ratio of feedback / exposed person of 1)
Questions: What do you think of this idea? How can I improve this idea? Would you be interested in helping with or joining such a group?
Info-hazard buddy: ask a trusted EA friend if they want to give you feedback on possible info-hazardy ideas
warning: some info-hazard ideas (/idea categories) should NOT be thought about more. some info-hazards can be personally damaging to someone (ask for clear consent before sharing them, and consider whether it's really useful to do so).
note: yeah I think I’m going to start with this first
I just want to document that this idea was mentioned in the book Superintelligence by Nick Bostrom.
The ideal form of collaboration for the present may therefore be one that does not initially require specific formalized agreements and that does not expedite advances in machine intelligence. One proposal that fits these criteria is that we propound an appropriate moral norm, expressing our commitment to the idea that superintelligence should be for the common good. Such a norm could be formulated as follows:

The common good principle: Superintelligence should be developed only for the benefit of all of humanity and in the service of widely shared ethical ideals.

Establishing from an early stage that the immense potential of superintelligence belongs to all of humanity will give more time for such a norm to become entrenched.

The common good principle does not preclude commercial incentives for individuals or firms active in related areas. For example, a firm might satisfy the call for universal sharing of the benefits of superintelligence by adopting a "windfall clause" to the effect that all profits up to some very high ceiling (say, a trillion dollars annually) would be distributed in the ordinary way to the firm's shareholders and other legal claimants, and that only profits in excess of the threshold would be distributed to all of humanity evenly (or otherwise according to universal moral criteria). Adopting such a windfall clause should be substantially costless, any given firm being extremely unlikely ever to exceed the stratospheric profit threshold (and such low-probability scenarios ordinarily playing no role in the decisions of the firm's managers and investors). Yet its widespread adoption would give humankind a valuable guarantee (insofar as the commitments could be trusted) that if ever some private enterprise were to hit the jackpot with the intelligence explosion, everybody would share in most of the benefits. The same idea could be applied to entities other than firms. For example, states could agree that if ever any one state's GDP exceeds some very high fraction (say, 90%) of world GDP, the overshoot should be distributed evenly to all.

The common good principle (and particular instantiations, such as windfall clauses) could be adopted initially as a voluntary moral commitment by responsible individuals and organizations that are active in areas related to machine intelligence. Later, it could be endorsed by a wider set of entities and enacted into law and treaty. A vague formulation, such as the one given here, may serve well as a starting point; but it would ultimately need to be sharpened into a set of specific verifiable requirements.
Epistemic status: narrative driven; arm-chair thinking; contains large simplifications, suppositions, and speculations
Conclusion: I don’t know if the overall effect is selecting for or against
Humans might be good at detecting whether someone is altruistic. So from an evolutionary psychology perspective, altruism might act as a commitment mechanism for cooperativeness (but remember, we're Adaptation-Executers, not Fitness-Maximizers). Similarly, but alternatively, similar alleles could be responsible for both cooperativeness and altruism. In either case, those seem like plausible explanations for why some amount of altruism was selected for, and would continue being selected for.
But I want to focus my answer mostly on speculating about new and future selection pressures for or against altruism. The term to search for to find the literature on its historical selection pressures is ‘problem of altruism’. The above is just a quick thought, not a summary of the literature.
Narratives for increased selection for altruism
It could be that we now have greater opportunities for cooperativeness than we used to: it's now possible to cooperate with people throughout the world, and not just with your local tribe. Plus, with winner-take-most financial dynamics, this could have increased the benefits of having large groups cooperate.
Also, a tribe of people sharing the same moral values will cooperate much more easily. A pure negative preference utilitarian giving money to another pure negative preference utilitarian knows that this money will be used for the pursuit of a shared goal. Whereas a pure egoist can’t as easily do this with other pure egoists as they all have different goals / they all want to help different people (ie. themselves, respectively). It’s much cheaper for people sharing moral values to cooperate as they don’t have to design robust contracts.
A) It could be that altruistic people think having more people in absolute terms, or more people like them in relative terms, is a good thing, and so make an effort to raise more children or conceive more biological children, respectively, on average.
B) It could be that when we get the technology to do advanced genetic engineering in humans, subsidies or laws encourage or force the selection of prosocial genes for the benefit of the common good.
Narratives for decreased selection for altruism
A) It could be that altruistic people give resources away to the extent that they don't have enough to raise (as many) children, or to raise them well enough.
B) It could be that altruistic people think it's wrong to create new people, either on deontological or utilitarian grounds. Deontological grounds could include directly being against creating new humans, or indirectly, being against taking welfare money to do so. From a utilitarian perspective, they could potentially be failing to see the longer-term consequences of the resulting selection effect, or they could rightfully have weighted this consideration as less important (or come to the right conclusion for epistemically wrong reasons).
C) It could be that when we get the technology to do advanced genetic engineering in humans, people want their kids to mostly care about their family and themselves, and not care about society as much.
Related: Donating now vs later (on Causeprioritization.org)
It seems likely that egoists have faster diminishing returns on marginal dollars and, as a consequence, are more risk-averse about making a lot of money. Ie. you can only save yourself once (sort of), but there are a lot of other people to save. Although if you have fringe moral values, they might be so neglected that this isn't as accurate.
As a potential example of altruistic people taking more risks: it seems more plausible that an egoist offered 100M USD to sell zir startup would take the money than an altruist, given that an altruist might still face only slowly diminishing returns on money at that level.
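A toy worked version of this in Python (the utility functions, probabilities, and dollar amounts are all assumptions of mine, chosen only to show how concavity drives the decision):

```python
import math

# Sell the startup for a sure $100M, or keep going with an assumed 10% chance
# of a $2B exit and a 90% chance of roughly nothing ($1M, to keep log finite).
SURE_SALE = 100e6
BIG_EXIT, P_BIG = 2e9, 0.10
SMALL_EXIT = 1e6

def egoist_utility(dollars):
    """Sharply diminishing returns: you can only save yourself once (sort of)."""
    return math.log(dollars)

def altruist_utility(dollars):
    """Roughly linear at this scale: there are many other people to save."""
    return dollars

for name, u in [("egoist", egoist_utility), ("altruist", altruist_utility)]:
    sell = u(SURE_SALE)
    gamble = P_BIG * u(BIG_EXIT) + (1 - P_BIG) * u(SMALL_EXIT)
    print(name, "prefers to", "sell" if sell > gamble else "gamble")
# egoist prefers to sell; altruist prefers to gamble
```

With log utility the sure sale beats the gamble (18.4 vs about 14.6 utils), while with linear utility the gamble's expected value of about $201M beats the sure $100M, matching the intuition above.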
It could also be that altruistic people, caring about people in the future, are more likely to invest their money long-term, and so gain power over a larger fraction of the economy.
It could be that philanthropists, by redistributing their wealth directly, through public goods, or by helping oppressed groups, see their relative capacity to influence the world diminish as they become relatively less wealthy than those who don't give. Trivially, if they are rational, they would only do that if they expect it to be the best course of action. But their altruistic instincts might incite them toward more rapid gratification, especially if they want to signal those instincts, and other mechanisms, such as Donor-Advised Funds, don't allow them to do so as much.
On pages 302-303 of “The Age of Ems”, Robin Hanson explains what ze thinks altruistic ems will donate money to and why they will choose those cause areas. Ze also says “Like people today, ems are eager to show their feelings about social and moral problems, and their allegiance to pro-social norms”, although I don't think ze explains why; it might just be a premise of the book that ems are a priori similar to humans, just living with different incentive structures.