Hey, I was wondering if you had taken into account the consumer surplus from smoking in your estimates?
This might not be a small factor:
Many smokers report enjoying the experience of smoking.
Many people choose to smoke despite knowing about the health effects.
Newer forms of tobacco consumption, like vaping, have significantly less severe health side-effects.
Rational choice is still possible in the presence of addiction—see for example Becker and Murphy (1988).
I think this is especially important because preventing people from smoking is much more coercive than most EA projects; typically we are helping people do something they either want to do anyway or are at worst indifferent to (e.g. with GiveDirectly or the Against Malaria Foundation). But taxing products that people want to consume (even if they might be ill-informed or the like) is quite different.
As a concrete example, the killing of Eric Garner by the NYPD, one of the catalysts of the Black Lives Matter movement, was caused in part by high tobacco taxation: he was being arrested on suspicion of selling untaxed loose cigarettes.
(I previously brought up this issue here)
You might be interested in this (courtesy of Gwern):
The Corporate Governance of Benedictine Abbeys: What can Stock Corporations Learn from Monasteries?
The corporate governance structure of monasteries is analyzed to derive new insights into solving agency problems of modern corporations. In the long history of monasteries, some abbots and monks lined their own pockets and monasteries were undisciplined. Monasteries developed special systems to check these excesses and therefore were able to survive for centuries. These features are studied from an economic perspective. Benedictine monasteries in Baden-Württemberg, Bavaria and German speaking Switzerland have an average lifetime of almost 500 years and only a quarter of them broke up as a result of agency problems. We argue that this is due to an appropriate governance structure, relying strongly on the intrinsic motivation of the members and on internal control mechanisms.
Every financial security requires a matching liability. Who or what owes the money at maturity? If it’s funded out of general taxation it’s a vote on whether non-holders should pay money to the holders. Holders are incentivized to give high numbers, non-holders are incentivized to give low numbers, and accurate retrospective judgements don’t seem to be relevant at all.
My guess is that the price falls rapidly to zero, like failed crypto schemes, though the game theory is not totally clear.
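To see why, here is a toy model. I'm assuming a voting rule the original doesn't specify: each voter reports a payout and the realised payout is the median report. Holders report as high as possible, non-holders report zero, and whenever holders are a minority the median, and hence the rational price, is zero:

```python
def realised_payout(n_holders: int, n_non_holders: int, max_report: float) -> float:
    """Median report when holders report max_report and non-holders report 0."""
    reports = [max_report] * n_holders + [0.0] * n_non_holders
    reports.sort()
    return reports[len(reports) // 2]  # (upper) median

# With holders in the minority, the median report is zero, so the
# security's rational price is zero.
print(realised_payout(n_holders=1_000, n_non_holders=9_000, max_report=100.0))  # 0.0
```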
Increasing the power of hereditary rulers (monarchs, the House of Lords) and introducing them in other places (e.g. making senates hereditary and replacing presidents with monarchs). This would reduce short-term incentives by extending time in government office, and take advantage of the high level of parent-child altruism to extend those incentives beyond an individual ruler’s lifespan.
Some EA-ish organisations are legally part of universities. For example, FHI is part of Oxford, and CHAI is part of UC Berkeley. In both cases, when I donated to these organisations in the past, the donation was (to my recollection) legally a restricted donation to the university. I assume GPI is also part of Oxford.
(To be clear, I am not arguing that you should give to these two specific organisations).
I think there are essentially two different angles here: how good the EA community is at achieving its stated purpose, and how healthy its members are.
For the first, the share of members donating at least 10% of their labour income is an obvious test. The extent to which EA research breaks new ground, versus going round in circles, would be another.
For the second presumably many standard measures of social dysfunction would be relevant—e.g. depression, crime, drug addiction, or unemployment. Conversely, we would also care about positive indicators, like professional success, having children, good family relationships, etc. However, you would presumably want to think about selection effects (does EA attract healthy people) vs treatment effects (does EA make people healthy). If we (hypothetically) made some people so depressed they rapidly drop out, our depression stats could look good, despite this being clearly bad!
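To make the survivorship worry concrete, here is a toy simulation (all rates are hypothetical, purely illustrative): if membership causes some members to become depressed, and depressed members mostly drop out before being surveyed, a survey of current members will substantially understate the true rate.

```python
import random

# Toy simulation of survivorship bias in community health surveys.
# All rates below are hypothetical, purely for illustration.
random.seed(0)

N = 10_000                    # people who ever joined
P_DEPRESSED = 0.15            # depression rate among everyone who joined
P_DROPOUT_IF_DEPRESSED = 0.8  # share of depressed members who leave early

current_members = []
for _ in range(N):
    depressed = random.random() < P_DEPRESSED
    leaves = depressed and random.random() < P_DROPOUT_IF_DEPRESSED
    if not leaves:
        current_members.append(depressed)

print(f"true rate among everyone who joined:    {P_DEPRESSED:.0%}")
print(f"rate a survey of current members finds: "
      f"{sum(current_members) / len(current_members):.1%}")
# The survey finds roughly 3-4%, despite a true rate of 15%.
```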
Another issue is judging whether someone is a member of the community. A survey could be unrepresentative if it doesn’t reach enough people—or if it reaches only peripherally attached people.
This is a really interesting post, thanks for writing it up.
I think I have two main models for thinking about these sorts of issues:
The accelerating view, where we have historically seen several big speed-ups in rate of change as a result of the introduction of more powerful methods of optimisation, and the introduction of human-level AGI is likely to be another. In this case the future is both potentially very valuable (because AGI will allow very rapid growth and world-optimisation) and endangered (because the default is that new optimisation forces do not respect the values or ‘values’ of previous modes.)
Recursively self-improving AGI?
The God of Straight Lines approach, where we’ll continue to see roughly 2% RGDP growth, because that is what always happens. AI will make us more productive, but not dramatically so, and at the same time previous sources of productivity growth will be exhausted, so overall trends will remain roughly intact. As such, the future is worth a lot less (perhaps we will colonise the stars, but only slowly, and growth rates won’t hit 50%/year) but also less endangered (because all progress will be incremental and slow, and humanity will remain in control). I think of this as being the epistemically modest approach.
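For a sense of how different these two worlds are, a quick back-of-the-envelope contrast using the two growth rates mentioned above (doubling time = ln 2 / ln(1 + r)):

```python
import math

# Doubling times implied by the two growth rates discussed above.
for r in (0.02, 0.50):
    t_double = math.log(2) / math.log(1 + r)
    print(f"{r:.0%} annual growth -> output doubles every {t_double:.1f} years")
# 2% annual growth -> output doubles every 35.0 years
# 50% annual growth -> output doubles every 1.7 years
```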
As a result, my version of Clara thinks of AI Safety work as reducing risk in the worlds that happen to matter the most. It’s also possible that these are the worlds where we can have the most influence, if you thought that strong negative feedback mechanisms strongly limited action in the Straight Line world.
Note that I was originally going to describe these as the inside and outside views, but I actually think that both have decent outside-view justifications.
Thanks for writing this, it was very interesting.
Readers might be interested in the EU’s AI Ethics guidelines, which various EA-type people tried (and apparently failed?) to influence in a productive direction.
A minor note:
the world’s largest trading bloc.
According to Google:
US GDP (2018): 20.5 trillion
EU GDP (2018): 18.8 trillion
And presumably EU GDP, and influence on AI, will fall when the UK leaves. (If you use PPP, I think China is bigger.)
Thanks for writing this up, I thought it was very helpful.
[updated] Global development interventions are generally more effective than Climate change interventions
Previously titled “Climate change interventions are generally more effective than global development interventions”. Because of an error the conclusions have significantly changed. [old version]. I have extended the analysis and now provide a more detailed spreadsheet model below.
Wow, I have never seen someone do this before! This is really impressive, excellent job being willing to reverse your conclusions (and article). Max upvote from me.
When I was studying maths it was made clear to us that some things were obvious, but not obviously obvious. Furthermore, many things I thought were obvious were in fact not obvious, and some were not even true at all!
Thanks for sharing this here.
It strikes me that making it easier to change contracts ex post could make the long run situation worse. If we develop AGI, one agent or group is likely to become dramatically more powerful in a relatively short period of time. It seems like it would be very useful if we could be confident they would abide by agreements they made beforehand, in terms of resource sharing, not harming others, respecting their values, and so on. The whole field of AI alignment could be thought of as essentially trying to achieve this inside the AI. I was wondering if you had given any thought to this?
I think Stefan is basically correct, and perhaps we should distinguish between Disclaimers (where I largely agree with Robin’s critique) and Disclosure (which I think is very important). For example, suppose a doctor were writing an article about how Amigdelogen can treat infection.
Obviously, I’m not saying Amigdelogen is the only drug that can treat infection. Also, I’m not saying it can treat cancer. And infection is not the only problem; world hunger is bad too. Also you shouldn’t spend 100% of your money on Amigdelogen. And just because we have Amigdelogen doesn’t mean you shouldn’t be careful about washing your hands.
This is unnecessary because no reasonable person would assume you were making any of these claims. Additionally, as Robin points out, by making these disclosures you add pressure for others to make them too.
I received a $5,000 payment from the manufacturer of Amigdelogen for writing this article, and hope to impress their hot sales rep.
This is useful information, because readers would reasonably assume you were unbiased, and this lets them more accurately evaluate how much weight to put on your claim, given that as non-experts they do not have the expertise to directly evaluate the evidence.
You’re definitely right that most grant-making organisations do not make much use of such disclaimers. However, I think this is mainly because it just doesn’t come up—most grantmaking occurs between people who do not know each other much socially, and who are often older and married anyway. In contrast, the EA community, especially in the Bay Area, is extremely tight socially, and also exhibits a high level of promiscuity. As such, the risk of decisions being unduly influenced by personal relationships is significantly higher. For example, back in 2016 OpenPhil revealed that they had advisors living with people they were evaluating, and evaluatees in relationships with OpenPhil staff (source). OpenPhil no longer seem to publish their conflicts of interest, but I suspect similar issues still occur. Separately, I have been told that some people in the Bay Area community explicitly use sexual relationships to make connections and influence the flow of funds from donors to workers and projects, which seems to raise severe concerns about objectivity and bias, as well as the potential for abuse (in both directions). I would be very concerned by either of these in the private sector, and I see little reason to hold EAs to a lower standard.
Donors in general are subject to a significant information asymmetry and have few defenses against improper behaviour from organisations, especially in areas where concrete outputs are scarce. Explicit declarations that specific suspect conduct has not taken place represent a minimum level of such protection.

With regard to your bullet points, I think a good analogy would be disclaimers in financial research. Every piece of financial research comes with multiple pages of disclaimers at the end, including a promise from the authors that the piece represents their true opinions and various sections about financial conflicts of interest. Perhaps the first analysts subject to these requirements found them intrusive; however, by now they are a totally automated and unremarked-upon part of the process. I would expect the same to apply here, partly because every disclosure should ideally say the same thing: “None of the judges were in a relationship with anyone they evaluated.”

Indeed, the disclosure requirements in the financial sector cover cases like these quite directly. For example, the CFA’s Ethical and Professional Standards (2016):
“… requires members and candidates to fully disclose to clients, potential clients and employers all actual and potential conflicts of interest.”
and from 2014:
“Members and Candidates must make full and fair disclosure of all matters that could reasonably be expected to impair their independence and objectivity or interfere with respective duties to their clients, prospective clients, and employer. Members and Candidates must ensure that such disclosures are prominent, are delivered in plain language, and communicate the relevant information effectively.”
In this case, donors and potential donors to an EA organisation are the equivalent of clients and potential clients of an investment firm, and I think a personal relationship with a grantee could reasonably be expected to impair judgement.
A case I personally came across involved two flatmates who worked for different divisions of the same bank (Research and Sales & Trading). Because the bank (rightfully) took the separation of these two functions very seriously, HR applied a lot of pressure to them and they found alternative living arrangements.

Another example is lotteries, where the family members of employees are not allowed to participate at all, because their winning would risk bringing the lottery into disrepute:
In most cases the employee’s immediate family and employees of lottery suppliers are also not allowed to play. In practice, there is no way that employees could alter the outcome of a game in their favor, but lottery officials generally believe that public confidence would be damaged should an employee win a large prize. (source)
This is perhaps slightly unfair, as they did not choose the employment of their family members, but it seems to be a small cost: the number of lottery family members is very small compared to the lottery-ticket-buying public, and there are other forms of gambling open to them. And the costs here should be smaller still, as all I am suggesting is disclosure, a much milder policy than prohibition.

I did appreciate that the fund’s most recent write-up does take note of potential conflicts of interest, along with a wealth of other details. However, I could not find the sort of conflict-of-interest policy you suggested on their website.
Thanks for writing this up. Impressive and super-informative as ever. Especially with Oliver I feel like I get a lot of good insight into your thought process.
This post has been shared within the organisation I work for, and I think it could do significant damage to the reputation of EA within my org.
Would you mind sharing, at least in general terms, which organisation you work for? I confess that if I knew I have forgotten.
Interesting work, thanks for doing the research. I really appreciate these posts on new topics I had no idea existed.
Wow, this is fascinating speculation, thanks for posting.
The section on pain varying with the social environment was especially interesting. It reminded me of the (common but not uncontroversial) parenting strategy whereby babies are left to cry at night, so as to avoid positively reinforcing crying and instead train them to sleep unaided.
Would it suggest that exhortations to ‘stop being a wuss’ were actually effective? The nearby people are effectively precommitting to not be moved by visible suffering, which might reduce the incentive for the victim to experience pain.
This is so adorable! I especially like when she volunteered to take over your job.