Indeed. Knowing what the proposal is would help here.
GWWC is not firebombing anything, happily. War crimes are obviously bad and need no counterfactual spelled out. The principle you outline does not apply to the pledge because many people (citation) don’t think the pledge is obviously bad. To engage these people in productive discourse you need to suggest at least one strategy which could be better.
Haven’t you just chosen precisely the most extreme counterfactual? Now you have to defend the view that Giving What We Can, run by very smart people who test what they’re doing, is causing net harm in expectation.
Strongly movement-affiliated EAs are not dominant in the pledge reference class.
Evidence:
2,294 people have taken the pledge already. (See the current count here.)
GWWC donations appear to be dominated by a handful of multi-millionaires who were drawn to the community by a meaningful pledge rather than by first getting involved in the movement.
Are there specific people who shouldn’t take the pledge as-is other than the small minority MichaelDickens highlighted, plus the politicians proposed above?
Does anyone have specific proposals for what kind of public pledge you would prefer to make, or ask people to make? Including guesses as to who would take or not take such a pledge would be helpful for assessing whether a change would be net positive.
I don’t expect CEA to implement changes to the Giving What We Can Pledge any time soon, due to the substantial momentum cost, but I think we should focus on actionable statements to best understand what’s going on here.
I think a stronger argument can be made in favour of the chosen marketing methods. It would probably conclude with something like ‘the huge value of a small number of extra links formed between otherwise-disjoint groups outweighed the minor weakening of cooperation standards across the community’.
Owen’s comment shows that the numbers can be big on the other side too, but valuing brands is a notoriously hard problem. In the hope that people refer back to this discussion when considering future strategies, here is an explicit estimate of one component of the value of avoiding minor harm to trust, for this specific case. It works by assuming that anyone put off from CEA simply shifts collaboration from one organisation to another, causing efficiency loss from wasting comparative advantages, not total loss. It also recognises that I made an unusually large update, and the average will be much smaller. Bracketed items are multiplied together to give a combined item in the next line.
(present value of a GWWC pledge, $73,292 x number of pledges next year, 856) x size of CEA compared to GWWC proxied by headcount, 2.45 x (my unusually large update to engagement with CEA, 30% x perceived relative strength of other affected people’s reactions, 17.5%) x relative advantage of CEA over competition, 17% x proportion of people with negative reactions, 48%
= (value realised by GWWC next year, $62,737,952 x size of CEA compared to GWWC proxied by headcount, 2.45) x (average affected person’s shift from CEA to elsewhere, 5.25% x relative advantage of CEA over competition, 17%) x proportion of people with negative reactions, 48%
= value realised by CEA next year, $153,707,982 x (inefficiency from one affected person’s shift, 0.89% x proportion of people with negative reactions, 48%)
= (value realised by CEA next year, $153,707,982 x proportion of CEA value lost, 0.43%)
= value of one year of CEA minor reputation preservation, $658,485
This model does not incorporate the effects EAG marketing can have on other EA organisations’ reputations (I suspect large), the value of not putting people off the movement entirely (unsure), or the effort required to clean up one’s reputation in the unlikely case that lasting harm is incurred (low in expectation?). To handle overoptimisation, I have tried to keep inputs conservative rather than discounting explicitly.
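For anyone who wants to audit or tweak the estimate, here is a minimal sketch (not part of the original analysis) that reproduces the chain above; every input is one of the assumptions stated in the text, not a measured quantity.

```python
# A minimal sketch reproducing the Fermi estimate above.
# All inputs are assumptions from the text, not measured quantities.

pledge_value = 73_292        # present value of one GWWC pledge, $
pledges_next_year = 856      # expected pledges next year
cea_to_gwwc_size = 2.45      # size of CEA relative to GWWC, proxied by headcount
my_update = 0.30             # my (unusually large) update to engagement with CEA
relative_reaction = 0.175    # other affected people's reactions, relative to mine
cea_advantage = 0.17         # relative advantage of CEA over competition
share_negative = 0.48        # proportion of people with negative reactions

gwwc_value = pledge_value * pledges_next_year   # ~$62.7m realised by GWWC next year
cea_value = gwwc_value * cea_to_gwwc_size       # ~$153.7m realised by CEA next year
avg_shift = my_update * relative_reaction       # ~5.25% average shift away from CEA
inefficiency = avg_shift * cea_advantage        # ~0.89% value lost per affected person
value_preserved = cea_value * inefficiency * share_negative

print(f"Value of one year of minor reputation preservation: ${value_preserved:,.0f}")
# -> roughly $658,000
```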
My guess after public and private discussion is that the approach which captures the most total value would be something like aggressive marketing (including pushing known EAs hard to tell their friends, slightly-more-than-comfortable numbers of chaser emails to applicants, and focussing almost entirely on the positives of attending) while avoiding anyone feeling deliberately misled. Obviously CEA is better placed to make this call, and I hope the broad discussion will help guide future decisions.
Hi Kerry! Congratulations again on the exceptional conference, and thanks for adding detail.
Updates I’ve made:
while in my tiny sample of 13 the emails with ‘from’ names like ‘Kit Surname via EAG’ worked out badly, it looks like you produced the most reasonable emails of that form possible without the benefit of hindsight. In answer to your question, I call this dishonest primarily because it gives the appearance of endorsement of content which I do not endorse. I would still not do this.
the deadlines at first appeared to be mainly to generate haste, but some or all had an operational function. My blanket use of the term ‘fake deadlines’ was therefore wrong.
aside from ‘we trust Kit’s judgement’, I see that most/all other statements made in the campaign were true in a technical sense. However, I maintain that this is insufficient. ‘I was looking through our attendee database’ is a great example, precisely because the whole message implies specificity to the recipient, while it appears that the looking could have been replaced by a single filter for people who hadn’t bought tickets. Likewise for ‘ideal participant’. At the very least, I’d bin these along with the “you’re a cool person, come to EA” emails Michael mentioned.
Additional arguments against my position:
even if CEA has standards substantially above average for its reference class, people might still not trust EAs to the extent I would like
maybe we don’t particularly need highly involved EAs to trust each other more, and this kind of marketing won’t materially affect what less involved people think.
I had also suspected that my concerns put me in a niche group which holds a small proportion of total relevance. I have updated away from this suspicion because the ratio of people who at the present time register a desire for greater honesty (17-27, probably nearer 17) to those who register no concern (3-5) is much higher than I had anticipated, and I suspect that forum participants are a highly relevant class for cooperation considerations.
To the other 16+ of those 17+ people: if my views are not representative of yours, it could be valuable for you to say so.
Hi Michael – please see my reply to Benito’s question for easier-to-explain suggestions. I don’t have informed views on automated flattery in general.
Hi Benito, Howie—sure, some highlights I’d recommend all EAs avoid in the future:
Sending emails ‘from’ other people. Friends I recommended received emails with ‘from’ name ‘Kit Surname via EAG’. Given that I did not create the content of these emails, this seemed somewhat creepy, and harmed outreach.
Untruths, e.g. fake deadlines, ‘we trust Kit’s judgement’, ‘I was looking through our attendee database’, etc. (My vanity fooled me for a solid few seconds, by the way!)
I can believe that whoever designed the strategy believed this was the right thing to do, because the items in the second bullet point are standard marketing tricks. However, the willingness to say things which are not true is evidence for… a willingness to say things which are not true. That’s annoying for anyone who wants to collaborate.
One counter-consideration: perhaps many donors and collaborators have a much better feel for the lines which people will or won’t cross, and hence would still assume complete trustworthiness on bigger issues. Conversely, people less familiar than I am might assume this behaviour pervades EA.
Honesty, because community norms
The conference itself was incredible, specifically the best weekend I can remember. Dishonest elements in the marketing beforehand seemed destructive to long-term coordination. Less important short-term effects included:
I switched from ‘trust everyone at CEA except...’ to ‘distrust everyone at CEA except...’, which is a wasteful position to have to take
dodgy emails convinced approximately −1 of the 12 people I nominated to attend, and now some of my friends who were interested in EA associate it with deception
I believe we should be truly honest when feasible, but at the very least we should not lie outside of extreme circumstances.
[Clarification: I still think it’s correct to assign higher default credence to the claims of CEA staff than those of most people, just not the extremely high credence I would like to use. I used the term ‘distrust’ in an idiosyncratic fashion, which was dumb, and I apologise for not picking this up earlier. ‘Be sceptical’ would have been more appropriate.]
The point I would most like to emphasise is that it’s often unclear what will happen to an asset when cost-effectiveness goes up. If you’re confident it’ll go up at that time, you buy/overweight it. If you’re confident it’ll go down at that time, you sell/underweight it. If it could go either way, this approach is weaker. Most discussion I have seen on this topic assumes that the ‘evil’ asset can be expected to move in the same direction as cost-effectiveness. Finding something with reliable covariance in either direction seems like it might be most of the challenge.
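To make the sign-dependence concrete, here is one way to see it (my notation, not from the post): let W be the value of the asset holding when you donate and C the cost-effectiveness of the target at that time. Expected impact then decomposes as

E[W × C] = E[W] × E[C] + Cov(W, C),

so overweighting the ‘evil’ asset adds value only insofar as Cov(W, C) is reliably positive; if the covariance could take either sign, the tilt buys you nothing in expectation.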
For more detail on that, here are some notes on the most valuable insights and most significant errors of the original Federal Reserve paper.
My guess is that the best suggestions from this post appear in ‘Applications outside of investment’. These do not fall prey to the issues above, since the mechanisms differ from the investment case: they directly exploit the extra influence one gains from being on the inside of an organisation, rather than relying on correlation/covariance.
(I might as well note that this comment represents my views on the matter, and no-one else’s, while the main post represents the views of others, and not necessarily mine.)